SWFOC: proven-core extender reconciliation + ROE hard gate (#49)
Prekzursil wants to merge 9 commits into main from
Conversation
📝 Walkthrough

Adds R&D tooling, JSON schemas and validators, SDK capability and routing models, a native named-pipe extender skeleton (C++ bridge + plugins), runtime backend routing and SDK operation services, profile variant resolution, many tests and live run artifacts, CI schema-smoke steps, and documentation/tooling for research/repro bundles.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant MainViewModel
participant ProfileVariantResolver
participant BinaryFingerprintService
participant CapabilityMapResolver
participant ProfileRepository
Client->>MainViewModel: AttachAsync(requestedProfileId)
MainViewModel->>ProfileVariantResolver: ResolveAsync(requestedProfileId, processes)
alt requestedProfileId == universal_auto
ProfileVariantResolver->>BinaryFingerprintService: CaptureFromPathAsync(modulePath)
BinaryFingerprintService-->>ProfileVariantResolver: BinaryFingerprint
ProfileVariantResolver->>CapabilityMapResolver: ResolveDefaultProfileIdAsync(fingerprint)
CapabilityMapResolver-->>ProfileVariantResolver: defaultProfileId
ProfileVariantResolver->>ProfileRepository: GetProfileAsync(defaultProfileId)
ProfileRepository-->>ProfileVariantResolver: ConcreteProfile
else
ProfileVariantResolver->>ProfileRepository: GetProfileAsync(requestedProfileId)
ProfileRepository-->>ProfileVariantResolver: ConcreteProfile
end
ProfileVariantResolver-->>MainViewModel: ProfileVariantResolution
MainViewModel-->>Client: AttachComplete(effectiveProfile)
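The variant-resolution branch above can be sketched roughly as follows. This is a minimal illustration, not the project's actual API: `UNIVERSAL_AUTO`, the callback parameters, and the returned dictionary shape are all assumed names standing in for the C# services in the diagram.

```python
# Illustrative sketch of the variant-resolution flow (assumed names throughout;
# the real ProfileVariantResolver is C# and considerably richer than this).
UNIVERSAL_AUTO = "universal_auto"

def resolve_variant(requested_profile_id, capture_fingerprint, resolve_default_id, get_profile):
    """Pick a concrete profile: fingerprint-based when 'universal_auto' is requested."""
    if requested_profile_id == UNIVERSAL_AUTO:
        fingerprint = capture_fingerprint()             # BinaryFingerprintService step
        effective_id = resolve_default_id(fingerprint)  # CapabilityMapResolver step
        reason = "RESOLVED_FROM_FINGERPRINT"
    else:
        effective_id = requested_profile_id             # explicit profile, no probing
        reason = "EXPLICIT_PROFILE"
    return {
        "profile": get_profile(effective_id),           # ProfileRepository step
        "resolvedProfileId": effective_id,
        "reasonCode": reason,
    }

# Example wiring with stub services:
resolution = resolve_variant(
    "universal_auto",
    capture_fingerprint=lambda: "sha256:abc",
    resolve_default_id=lambda fp: "base_swfoc",
    get_profile=lambda pid: {"id": pid},
)
```

The key property the diagram encodes is that explicit profile requests bypass fingerprinting entirely; only `universal_auto` pays the probing cost.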
sequenceDiagram
participant Caller
participant SdkOperationRouter
participant ProfileVariantResolver
participant CapabilityMapResolver
participant SdkExecutionGuard
participant RuntimeAdapter
participant Extender
Caller->>SdkOperationRouter: ExecuteAsync(SdkOperationRequest)
SdkOperationRouter->>ProfileVariantResolver: ResolveAsync(profileId,...)
ProfileVariantResolver-->>SdkOperationRouter: ProfileVariantResolution
SdkOperationRouter->>CapabilityMapResolver: ResolveAsync(fingerprint, operationId, anchors)
CapabilityMapResolver-->>SdkOperationRouter: CapabilityResolutionResult
SdkOperationRouter->>SdkExecutionGuard: CanExecute(resolution, isMutation)
alt Allowed
SdkOperationRouter->>RuntimeAdapter: ExecuteAsync(routedRequest)
RuntimeAdapter->>Extender: SendCommand (NamedPipe)
Extender-->>RuntimeAdapter: ExtenderResult
RuntimeAdapter-->>SdkOperationRouter: ActionExecutionResult
SdkOperationRouter-->>Caller: SdkOperationResult(success)
else Blocked
SdkOperationRouter-->>Caller: SdkOperationResult(failed, reason)
end
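The Allowed/Blocked branch in the second diagram is a fail-closed gate: mutations run only when capability resolution explicitly permits them. A minimal sketch under assumed names (the guard signature is invented; `SAFETY_MUTATION_BLOCKED` is the reason code discussed elsewhere in this review):

```python
# Hedged sketch of a fail-closed execution guard (not the real SdkExecutionGuard).
def can_execute(resolution, is_mutation):
    """Reads are permissive; mutations require an explicitly available capability."""
    if not is_mutation:
        return True, None
    if resolution.get("status") == "Available":
        return True, None
    # Fail closed: any unknown or unavailable state blocks the mutation.
    return False, "SAFETY_MUTATION_BLOCKED"

allowed, reason = can_execute({"status": "Unavailable"}, is_mutation=True)
```

The design choice worth noting is that the default branch blocks; a new or unrecognized capability status can never silently enable a mutation.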
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR.

Compliance status legend: 🟢 Fully Compliant · 🟡 Partially Compliant · 🔴 Not Compliant · ⚪ Requires Further Human Verification · 🏷️ Compliance label
PR Code Suggestions ✨

Explore these optional code suggestions:
Codacy Analysis Summary: 0 new issues (≤ 0 issues).
💡 Codex Review
SWFOC-Mod-Menu/src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs
Lines 264 to 266 in eb697d3
This fallback now runs for every profile when no direct ExeTarget match is found, so attaching a non-FoC profile (for example base_sweaw) can incorrectly bind to a running StarWarsG.exe FoC process instead of returning ATTACH_NO_PROCESS. That leads to resolving symbols and executing actions against the wrong game binary, which is a safety/accuracy regression. The fallback should be gated to FoC-specific attaches (e.g., profile.ExeTarget == ExeTarget.Swfoc).
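The gating Codex asks for could look like the following sketch. `ExeTarget`, the `"Swfoc"` target value, and the helper itself are assumed names rendered in Python for illustration; the repo's actual code is C# inside `RuntimeAdapter`.

```python
# Sketch: only FoC-targeted profiles may fall back to a running StarWarsG.exe process.
FOC_TARGET = "Swfoc"            # stands in for profile.ExeTarget == ExeTarget.Swfoc
FOC_PROCESS = "starwarsg.exe"

def pick_process(profile_exe_target, direct_match, running_processes):
    """Prefer a direct ExeTarget match; gate the FoC fallback to FoC profiles only."""
    if direct_match is not None:
        return direct_match
    if profile_exe_target == FOC_TARGET:
        for name in running_processes:
            if name.lower() == FOC_PROCESS:   # fallback restricted to FoC attaches
                return name
    return None                               # caller maps None -> ATTACH_NO_PROCESS
```

With this gate, a `base_sweaw` attach with no direct match surfaces `ATTACH_NO_PROCESS` rather than binding to the wrong binary.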
```csharp
var attachedProfile = _attachedProfile
    ?? await _profileRepository.ResolveInheritedProfileAsync(request.ProfileId, cancellationToken);
var capabilityReport = await ProbeCapabilitiesAsync(attachedProfile, cancellationToken);
```
Skip unconditional extender probing before route selection
Every action execution now probes extender capabilities before routing, even for actions that will ultimately run on memory/helper/save backends. In the common environment where the named-pipe bridge is not running, this incurs repeated pipe connect timeouts before each action and makes non-extender operations stall noticeably. Probe extender capabilities only when extender is a possible route (or cache probe failures) to avoid this runtime latency regression.
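One way to implement the suggestion is to probe only when the extender is actually a candidate route and memoize a recent failure. A rough sketch with invented names; the real code is async C# and the TTL value is an assumption:

```python
# Sketch: lazy + cached extender probing to avoid repeated pipe-connect timeouts.
import time

class ProbeCache:
    def __init__(self, probe, failure_ttl_seconds=30.0):
        self._probe = probe                  # callable returning a capability report (may raise)
        self._failure_ttl = failure_ttl_seconds
        self._failed_at = None

    def capabilities(self, route_may_use_extender):
        if not route_may_use_extender:
            return None                      # memory/helper/save routes skip probing entirely
        now = time.monotonic()
        if self._failed_at is not None and now - self._failed_at < self._failure_ttl:
            return None                      # recent failure cached: no new pipe timeout
        try:
            report = self._probe()
            self._failed_at = None
            return report
        except OSError:
            self._failed_at = now            # remember the failure instead of re-timing-out
            return None
```

Either half of the change alone (lazy probing, or failure caching) removes the per-action stall in the common no-bridge environment.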
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
docs/LIVE_VALIDATION_RUNBOOK.md (1)
38-44: ⚠️ Potential issue | 🟡 Minor

Optional ROE scope run omits `-Strict`, creating inconsistent failure semantics vs. the hard gate.

Line 42's optional run can silently swallow conditions that the formal ROE hard gate (lines 29-36) would fail on. An analyst who runs the optional command expecting parity with the gate will get a different failure surface.

Consider annotating the optional runs with a comment that `-Strict` and `-RequireNonBlockedClassification` are intentionally omitted for quick diagnostics, so the asymmetry is explicit rather than accidental.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/LIVE_VALIDATION_RUNBOOK.md` around lines 38 - 44, The optional ROE scope invocation of pwsh ./tools/run-live-validation.ps1 omits the -Strict and -RequireNonBlockedClassification flags, causing different failure semantics vs. the formal ROE hard gate; update the docs by adding a brief inline note next to the optional ROE command (pwsh ./tools/run-live-validation.ps1 -NoBuild -Scope ROE -EmitReproBundle $true) stating that -Strict and -RequireNonBlockedClassification are intentionally omitted for quicker diagnostics and that the hard gate uses those flags for strict checks, so users understand the asymmetry.

tools/validate-policy-contracts.ps1 (1)
40-70: ⚠️ Potential issue | 🟠 Major

Add `docs/LIVE_VALIDATION_RUNBOOK.md` to the `$requiredFiles` array.

The file exists in the repo but is missing from the policy contract validator at lines 40-70. Since similar runbooks (`docs/MOD_ONBOARDING_RUNBOOK.md`, `docs/RELEASE_RUNBOOK.md`) are already in the contract, this file should be added to ensure future deletions are caught by CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/validate-policy-contracts.ps1` around lines 40 - 70, The $requiredFiles array in validate-policy-contracts.ps1 is missing docs/LIVE_VALIDATION_RUNBOOK.md; update the $requiredFiles variable (the array initializer named $requiredFiles) to include "docs/LIVE_VALIDATION_RUNBOOK.md" alongside the other runbook entries (e.g., "docs/MOD_ONBOARDING_RUNBOOK.md", "docs/RELEASE_RUNBOOK.md") so the policy contract validator will check for that file.

profiles/default/profiles/base_swfoc.json (1)
77-88: ⚠️ Potential issue | 🟡 Minor

`executionKind: "Sdk"` is a hard-gate change: `set_credits` silently blocks for users without the native extender.

Previously Memory-backed, this action now routes exclusively through the SDK/Extender path. Per the architecture, if the extender is unavailable the route is `SAFETY_MUTATION_BLOCKED` (fail-closed). With `backendPreference: "auto"` and `requiredCapabilities: ["set_credits"]`, there is no silent fallback to the old memory path. Confirm that:
- A user-visible error message (not just a reason code) is surfaced when the extender is absent.
- The upgrade/deployment notes for this profile change document the extender prerequisite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@profiles/default/profiles/base_swfoc.json` around lines 77 - 88, The change sets executionKind for the action "set_credits" to "Sdk", which will block users if the native extender is missing; update the profile to surface a clear user-visible error and document the extender prerequisite: modify the "set_credits" action handling so that when executionKind == "Sdk" and the extender is unavailable it returns a user-facing error message (not just SAFETY_MUTATION_BLOCKED) and/or provide a fallback by adjusting "backendPreference" (e.g., allow memory fallback) or adding a conditional route; also add upgrade/deployment notes documenting the extender requirement and mention requiredCapabilities ["set_credits"] so operators are aware.

src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs (1)
145-253: ⚠️ Potential issue | 🟠 Major

Add unit tests for RuntimeAdapter's attach variant resolution and mode probing orchestration.
RuntimeAdapter now orchestrates variant resolution (lines 155–168) and runtime mode probing (lines 206–212) in AttachAsync, attaching diagnostic metadata (runtimeModeHint, runtimeModeEffective, runtimeModeReasonCode, etc.). Per coding guidelines, attach/mode/resolution behavior changes in runtime files must include tests.
RuntimeAttachSmokeTests.cs exists but only verifies basic attach/detach with an explicit profile ID, skipping the variant resolution code path entirely. Component-level tests cover ProfileVariantResolver, RuntimeModeProbeResolver, and BackendRouter in isolation, but no integration test verifies their orchestration within RuntimeAdapter or the routing dispatch in ExecuteAsync (lines 828–860).
Add tests that verify:
- Universal profile resolution path in AttachAsync
- Mode probing metadata attachment and effective mode selection
- ExecuteAsync backend routing dispatch with capability probing fallback
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs` around lines 145 - 253, Add integration unit tests for RuntimeAdapter to cover the universal profile variant resolution and runtime mode probing orchestration and ExecuteAsync routing: write tests that call AttachAsync (via Run/Arrange/Assert) with profileId equal to ProfileVariantResolver.UniversalProfileId and a mocked _profileVariantResolver that returns a ProfileVariantResolution with ResolvedProfileId, ReasonCode, Confidence and optional FingerprintId, then assert the returned AttachSession (or CurrentSession) process metadata contains "requestedProfileId", "resolvedVariant", "resolvedVariantReasonCode", "resolvedVariantConfidence" (and fingerprint when provided); also mock _runtimeModeProbeResolver to return a ModeProbe with HintMode, EffectiveMode, ReasonCode and signal counts and assert metadata keys "runtimeModeHint", "runtimeModeEffective", "runtimeModeReasonCode", "runtimeModeTacticalSignals", "runtimeModeGalacticSignals" and that the session process Mode equals the EffectiveMode; finally add a test for ExecuteAsync that sets up mocked backends and capability probing fallback (mock router/resolver to force capability probe path) and assert ExecuteAsync dispatches to the expected backend method (or route) when capability probing is used. Use dependency injection/mocks for _profileRepository, _signatureResolver, _runtimeModeProbeResolver, _profileVariantResolver and backend router, and verify metadata on the AttachSession.Process.Metadata and routing behavior from ExecuteAsync.
```cpp
class NamedPipeBridgeServer {
public:
    using Handler = std::function<BridgeResult(const BridgeCommand&)>;

    explicit NamedPipeBridgeServer(std::string pipeName);

    void setHandler(Handler handler);
    bool start();
    void stop();
    bool running() const noexcept;

private:
    void runLoop();
    BridgeResult handleRawCommand(const std::string& jsonLine) const;

    std::string pipeName_;
    Handler handler_;
    std::atomic<bool> running_ {false};
    std::thread worker_;
};
```
🧩 Analysis chain

🏁 Scripts executed against Prekzursil/SWFOC-Mod-Menu to verify the finding:

```bash
# Find the implementation file and check for a destructor
find . -name "NamedPipeBridgeServer.cpp" -o -name "*NamedPipeBridgeServer*.cpp" | head -20
grep -n "~NamedPipeBridgeServer\|destructor" ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp
wc -l ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp
# Read the entire cpp file to check for destructor and thread handling
cat -n ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp
```
Missing destructor — std::thread worker_ will call std::terminate() if joinable at destruction.
NamedPipeBridgeServer holds a std::thread worker_ but declares no destructor. If the object is destroyed without calling stop() first, the thread remains joinable and the default destructor will invoke std::terminate(), crashing the application.
While stop() properly joins the thread when called (lines 160–162 of the .cpp file), there is no guarantee it will be invoked before destruction. Add an explicit destructor to the header that ensures the thread is cleaned up.
Proposed fix:

```diff
     bool start();
     void stop();
     bool running() const noexcept;
+    ~NamedPipeBridgeServer();

 private:
```

And implement in the .cpp file:

```cpp
NamedPipeBridgeServer::~NamedPipeBridgeServer() {
    stop();
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp`
around lines 30 - 49, The class NamedPipeBridgeServer lacks a destructor and its
member std::thread worker_ can be joinable at object destruction causing
std::terminate; add an explicit destructor (~NamedPipeBridgeServer) to the class
declaration and implement it to call stop() so the runLoop/worker_ is stopped
and joined safely (ensure stop() is noexcept-safe to call from a destructor and
references handler_/running_ correctly); update the header to declare
~NamedPipeBridgeServer() and the .cpp to define it as calling stop().
```cpp
void NamedPipeBridgeServer::stop() {
    if (!running_.exchange(false)) {
        return;
    }

#if defined(_WIN32)
    auto fullPipeName = std::string("\\\\.\\pipe\\") + pipeName_;
    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(800);
    while (std::chrono::steady_clock::now() < deadline) {
        auto client = CreateFileA(
            fullPipeName.c_str(),
            GENERIC_READ | GENERIC_WRITE,
            0,
            nullptr,
            OPEN_EXISTING,
            0,
            nullptr);
        if (client != INVALID_HANDLE_VALUE) {
            CloseHandle(client);
            break;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(25));
    }
#endif

    if (worker_.joinable()) {
        worker_.join();
    }
}
```
stop() can deadlock: if the busy-poll times out before successfully connecting, worker_.join() blocks indefinitely while the worker is still in ConnectNamedPipe.
The 800 ms window assumes the pipe is created and available within that time. If CreateNamedPipeA is failing on the worker side (sleep-retry path, lines 237–239), the pipe may not exist yet when stop() gives up — and worker_.join() will hang. The idiomatic fix for Windows is CancelSynchronousIo, which directly cancels the blocking ConnectNamedPipe on the worker thread, causing it to return with ERROR_OPERATION_ABORTED. The existing !connected branch already handles this gracefully (close, continue, then running_.load() returns false).
🔧 Proposed fix using CancelSynchronousIo:

```diff
 void NamedPipeBridgeServer::stop() {
     if (!running_.exchange(false)) {
         return;
     }

 #if defined(_WIN32)
-    auto fullPipeName = std::string("\\\\.\\pipe\\") + pipeName_;
-    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(800);
-    while (std::chrono::steady_clock::now() < deadline) {
-        auto client = CreateFileA(
-            fullPipeName.c_str(),
-            GENERIC_READ | GENERIC_WRITE,
-            0,
-            nullptr,
-            OPEN_EXISTING,
-            0,
-            nullptr);
-        if (client != INVALID_HANDLE_VALUE) {
-            CloseHandle(client);
-            break;
-        }
-        std::this_thread::sleep_for(std::chrono::milliseconds(25));
-    }
+    if (worker_.joinable()) {
+        CancelSynchronousIo(static_cast<HANDLE>(worker_.native_handle()));
+    }
 #endif

     if (worker_.joinable()) {
         worker_.join();
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 135 -
163, stop() can deadlock because the busy-poll may time out before the worker's
CreateNamedPipeA/ConnectNamedPipe runs, leaving the worker blocked in
ConnectNamedPipe; to fix, on Windows call CancelSynchronousIo on the worker
thread's native handle before calling worker_.join() so the blocked
ConnectNamedPipe returns with ERROR_OPERATION_ABORTED and the worker can exit;
update NamedPipeBridgeServer::stop() to, after the CreateFileA polling loop and
before worker_.join(), check if worker_.joinable() and running_.load()==false
and then call CancelSynchronousIo(worker_.native_handle()) (guarded by `#if`
defined(_WIN32)), optionally check/ignore its return value, then join the
worker; reference symbols: stop(), worker_, ConnectNamedPipe, CreateFileA,
CreateNamedPipeA, running_.exchange/running_.load(), CancelSynchronousIo,
worker_.native_handle().
```csharp
var variant = await _profileVariantResolver.ResolveAsync(request.ProfileId, cancellationToken: cancellationToken);
var fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);

var anchors = ExtractResolvedAnchors(request.Context);
var capability = await _capabilityMapResolver.ResolveAsync(
    fingerprint,
    variant.ResolvedProfileId,
    request.OperationId,
    anchors,
    cancellationToken);
```
CaptureFromPathAsync and ResolveAsync on the capability map can throw — unhandled exceptions will bypass diagnostics logging.
Lines 91, 94-99 call external services that may throw (process exited, file not found, I/O error). Unlike other error paths in this method, these exceptions propagate without writing to _sdkDiagnosticsSink, breaking the invariant that every ExecuteAsync call produces a diagnostic record.
🛡️ Proposed fix: wrap in try/catch and emit a diagnostic result

```diff
     var variant = await _profileVariantResolver.ResolveAsync(request.ProfileId, cancellationToken: cancellationToken);
-    var fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);
-    var anchors = ExtractResolvedAnchors(request.Context);
-    var capability = await _capabilityMapResolver.ResolveAsync(
-        fingerprint,
-        variant.ResolvedProfileId,
-        request.OperationId,
-        anchors,
-        cancellationToken);
+    BinaryFingerprint fingerprint;
+    try
+    {
+        fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);
+    }
+    catch (Exception ex)
+    {
+        var captureError = new SdkOperationResult(
+            false,
+            $"Failed to capture binary fingerprint: {ex.Message}",
+            CapabilityReasonCode.RuntimeNotAttached,
+            SdkCapabilityStatus.Unavailable);
+        await _sdkDiagnosticsSink.WriteAsync(request, captureError, cancellationToken);
+        return captureError;
+    }
+
+    var anchors = ExtractResolvedAnchors(request.Context);
+    CapabilityResolutionResult capability;
+    try
+    {
+        capability = await _capabilityMapResolver.ResolveAsync(
+            fingerprint,
+            variant.ResolvedProfileId,
+            request.OperationId,
+            anchors,
+            cancellationToken);
+    }
+    catch (Exception ex)
+    {
+        var resolveError = new SdkOperationResult(
+            false,
+            $"Capability resolution failed: {ex.Message}",
+            CapabilityReasonCode.UnknownSdkOperation,
+            SdkCapabilityStatus.Unavailable);
+        await _sdkDiagnosticsSink.WriteAsync(request, resolveError, cancellationToken);
+        return resolveError;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/SwfocTrainer.Core/Services/SdkOperationRouter.cs` around lines 90 - 99,
The calls to _binaryFingerprintService.CaptureFromPathAsync and
_capabilityMapResolver.ResolveAsync can throw and are not currently wrapped,
which allows exceptions to bypass _sdkDiagnosticsSink; update the ExecuteAsync
flow in SdkOperationRouter (around the CaptureFromPathAsync and ResolveAsync
calls) to wrap those awaits in a try/catch that catches Exception, emits a
diagnostic result to _sdkDiagnosticsSink (matching the shape/fields used by
other error paths), and returns/short-circuits the method; ensure you include
relevant context (request.OperationId, request.ProfileId, processId/processPath,
and exception) in the diagnostic and preserve cancellation token usage when
calling the services.
```csharp
private static Task<IReadOnlyList<string>> TryGetLoadedModulesAsync(int? processId, CancellationToken cancellationToken)
{
    if (!processId.HasValue || processId.Value <= 0)
    {
        return Task.FromResult<IReadOnlyList<string>>(Array.Empty<string>());
    }

    try
    {
        cancellationToken.ThrowIfCancellationRequested();
        using var process = Process.GetProcessById(processId.Value);
        var modules = process.Modules
            .Cast<ProcessModule>()
            .Select(x => x.ModuleName)
            .Where(x => !string.IsNullOrWhiteSpace(x))
            .Distinct(StringComparer.OrdinalIgnoreCase)
            .OrderBy(x => x, StringComparer.OrdinalIgnoreCase)
            .ToArray();
        return Task.FromResult<IReadOnlyList<string>>(modules);
    }
    catch
    {
        return Task.FromResult<IReadOnlyList<string>>(Array.Empty<string>());
    }
```
Silent catch in TryGetLoadedModulesAsync buries diagnostics, violating the "record actionable reason codes" guideline.
The bare catch at line 97 swallows Win32Exception (elevated privilege required), InvalidOperationException (process exited mid-enumeration), and other process-inspection failures with no trace. Since the method is private static it has no logger, so failures are completely invisible.
Options:
- Make the method an instance method (or accept an `ILogger` parameter) so failures can be logged at `Debug`.
- Bubble the exception up to `CaptureFromPathAsync` where the logger is available, then handle and log there.
This is especially important given that privilege failures (the most common real-world failure mode) look identical to "no process provided" in the current implementation. Based on learnings, runtime operations must record actionable reason codes in diagnostics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/SwfocTrainer.Runtime/Services/BinaryFingerprintService.cs` around lines
77 - 100, TryGetLoadedModulesAsync currently swallows all exceptions; change it
to surface diagnostics by adding an ILogger parameter (or making it an instance
method so you can access the class logger) and log any exception at Debug with
an actionable reason code and exception details instead of a bare catch;
specifically, update the signature TryGetLoadedModulesAsync(int? processId,
CancellationToken cancellationToken, ILogger logger) (or instance method using
the existing ILogger), call cancellationToken.ThrowIfCancellationRequested() as
before, and in the catch block log logger.LogDebug("TryGetLoadedModulesAsync:
Failed to enumerate modules for PID {processId}; reason={ReasonCode}",
processId, "<Win32Exception|InvalidOperation|Other>") including the actual
exception object, then return the empty list only after logging; alternatively,
if you prefer bubbling, remove the catch and let the exception propagate to
CaptureFromPathAsync where the logger exists and then log the same actionable
reason codes there.
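Classifying the caught exception into an actionable reason code, as suggested, might look like the following sketch. The code strings are illustrative, not the repo's actual enum, and the Python type names stand in for their .NET counterparts:

```python
# Sketch: map module-enumeration failures to actionable reason codes (codes invented).
def classify_module_enumeration_failure(exc):
    """Distinguish privilege failures from process exit so diagnostics stay actionable."""
    name = type(exc).__name__
    if name in ("Win32Exception", "PermissionError"):
        return "MODULES_ACCESS_DENIED"      # likely elevation required
    if name in ("InvalidOperationException", "ProcessLookupError"):
        return "MODULES_PROCESS_EXITED"     # process went away mid-enumeration
    return "MODULES_UNKNOWN_FAILURE"
```

The point of the mapping is exactly the complaint above: a privilege failure and "no process provided" must not produce identical diagnostics.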
```powershell
function Invoke-WslBuild {
    param(
        [string]$Config
    )

    $script = Join-Path $repoRoot "tools/native/build-native.sh"
    $linuxScript = $script.Replace("\\", "/")
    $linuxRoot = ([string]$repoRoot).Replace("\\", "/")
    $quotedCommand = "bash '$linuxScript' '$linuxRoot/native/build-wsl' '$Config'"

    & wsl.exe -e bash -lc $quotedCommand
    if ((Get-LastExitCodeOrZero) -ne 0) {
        throw "native build failed for WSL mode."
    }
}
```
🧩 Analysis chain

🏁 Scripts executed to verify the finding:

```bash
# Locate the build scripts and the WSL invocation
fd "build-native.ps1" --type f
fd "build-native.sh" --type f
rg "wsl\.exe" -B 2 -A 5
# Read the build-native.ps1 file to verify the code snippet
cat -n tools/native/build-native.ps1 | head -180
# Check tooling docs for WSL assumptions and documentation requirements
cat tools/AGENTS.md
cat tools/native/README.md
```
WSL path translation is incomplete — paths must be converted to WSL format, not just backslash-escaped.
Lines 166-167 replace backslashes with forward slashes but omit the drive-letter translation (e.g., `C:\repo\...` becomes `C:/repo/...`, which is invalid inside WSL; it must be `/mnt/c/repo/...`). Use `wsl.exe wslpath -u` to perform correct path conversion:

```diff
 function Invoke-WslBuild {
     param(
         [string]$Config
     )
     $script = Join-Path $repoRoot "tools/native/build-native.sh"
-    $linuxScript = $script.Replace("\\", "/")
-    $linuxRoot = ([string]$repoRoot).Replace("\\", "/")
+    $linuxScript = (& wsl.exe wslpath -u $script)
+    $linuxRoot = (& wsl.exe wslpath -u ([string]$repoRoot))
     $quotedCommand = "bash '$linuxScript' '$linuxRoot/native/build-wsl' '$Config'"
     & wsl.exe -e bash -lc $quotedCommand
```

Additionally, per coding guidelines, add explicit documentation to the script help or function comments stating that WSL mode assumes WSL is installed and configured, and paths require proper conversion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/native/build-native.ps1` around lines 160 - 174, The Invoke-WslBuild
function currently only replaces backslashes with slashes for $linuxScript and
$linuxRoot, which doesn't convert Windows drive letters to WSL paths; change the
flow to call wsl.exe wslpath -u to convert both $script and $repoRoot into
proper WSL paths (use the converted values instead of the simple Replace results
when building $quotedCommand), ensure you handle the wslpath call failures and
propagate errors to the existing exit-check, and add a brief function
comment/header stating that WSL mode requires WSL installed and that paths are
converted via wslpath -u (mentioning $linuxScript, $linuxRoot, and
$quotedCommand in the comment).
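The drive-letter translation that `wslpath -u` performs can be approximated as follows. This is a simplification for illustration only: real `wslpath` also handles UNC paths, relative paths, and custom mount roots.

```python
# Rough approximation of `wslpath -u` for plain drive-letter paths only.
def windows_to_wsl(path):
    """Convert e.g. C:\\repo\\x to /mnt/c/repo/x; pass through non-drive paths."""
    path = path.replace("\\", "/")
    if len(path) >= 2 and path[1] == ":":
        drive = path[0].lower()
        return f"/mnt/{drive}{path[2:]}"
    return path
```

This shows why the bare backslash replacement in the original code is insufficient: the `C:` prefix survives and is meaningless inside WSL.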
```powershell
if ([string]::IsNullOrWhiteSpace($OutputPath)) {
    $outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
    New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
    $OutputPath = Join-Path $outputDir "fingerprint.json"
```
Output path TestResults/research/<runId>/ violates the mandated TestResults/runs/<runId>/ directory structure.
The coding guideline explicitly requires all per-run output to be isolated under TestResults/runs/<runId>/. This script writes to a different subtree, meaning the CI fingerprint-validation steps and validate-repro-bundle.ps1 that expect the standard path will not find research artifacts, and the isolation guarantee isn't uniformly enforced.
Either migrate to TestResults/runs/<runId>/research/ (keeping it within the mandated structure) or explicitly amend the coding guideline to carve out TestResults/research/ as a recognized second root.
As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/capture-binary-fingerprint.ps1` around lines 21 - 24, The
script sets $outputDir to "TestResults/research/$RunId" which violates the
required per-run isolation; change the default path logic so $outputDir is built
under the mandated TestResults/runs tree (e.g., Join-Path $repoRoot
"TestResults/runs/$RunId/research") and continue to create the directory and set
$OutputPath (using $OutputPath, $outputDir, $repoRoot, $RunId) so downstream
consumers like validate-repro-bundle.ps1 find artifacts in the expected
location.
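A small helper that derives every per-run artifact path from the mandated root makes this kind of drift harder to introduce. Sketched in Python with an assumed layout; the repo's tooling is PowerShell:

```python
# Sketch: build every per-run artifact path under TestResults/runs/<runId>/ (assumed layout).
from pathlib import PurePosixPath

def run_artifact_path(repo_root, run_id, *parts):
    """All per-run output hangs off TestResults/runs/<runId>/ so isolation is uniform."""
    return PurePosixPath(repo_root, "TestResults", "runs", run_id) / PurePosixPath(*parts)

p = run_artifact_path("/repo", "20240101T000000Z", "research", "fingerprint.json")
```

Centralizing the root in one function means a second subtree like `TestResults/research/` cannot appear without touching the helper.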
```powershell
param(
    [Parameter(Mandatory = $true)][string]$InputPath,
    [string]$OutputPath
)
```
🛠️ Refactor suggestion | 🟠 Major
Missing deterministic smoke command and explicit failure classification codes.
Two coding-guideline violations:
- No smoke command: The script has no embedded usage example or fixture invocation in its header. `tools/fixtures/signature_pack_sample.json` already exists and should be referenced (e.g., as a `# Example:` comment or `#Requires` block) to satisfy the "deterministic fixture or smoke command" requirement.
- Failure classification codes absent: The only error path is `throw "Input signature pack not found: $InputPath"` with no classification code. Use an explicit code (e.g., `[ERR_MISSING_INPUT]`) consistent with how other tooling scripts in this PR classify failures.
♻️ Suggested header addition
```diff
 param(
     [Parameter(Mandatory = $true)][string]$InputPath,
     [string]$OutputPath
 )
+
+# Example (smoke):
+# pwsh ./tools/research/normalize-signature-pack.ps1 \
+#   -InputPath tools/fixtures/signature_pack_sample.json \
+#   -OutputPath /tmp/normalized_sample.json
```

And for the failure throw:
```diff
- throw "Input signature pack not found: $InputPath"
+ throw "[ERR_MISSING_INPUT] Input signature pack not found: $InputPath"
```

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command" and "Failure conditions must use explicit classification codes".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/normalize-signature-pack.ps1` around lines 1 - 4, Add a
deterministic smoke/fixture reference and a failure classification code: update
the script header (the param block) to include an Example/Requires comment that
references the existing fixture tools/fixtures/signature_pack_sample.json (e.g.,
add a "# Example: .\normalize-signature-pack.ps1 -InputPath
tools/fixtures/signature_pack_sample.json" or a `#Requires-style` note) so the
script has a deterministic smoke command, and change the error throw for the
missing input (the existing throw "Input signature pack not found: $InputPath")
to throw a classified message or use a custom error with an explicit code like
"[ERR_MISSING_INPUT]" so the failure path includes the classification code.
```powershell
generatedAtUtc = [string]$pack.generatedAtUtc
fingerprintId = [string]$pack.fingerprintId
defaultProfileId = [string]$pack.defaultProfileId
sourceProfilePath = [string]$pack.sourceProfilePath
```
null → "" coercion silently changes sourceProfilePath semantics.
In PowerShell, [string]$null evaluates to "". When the input pack has "sourceProfilePath": null, the output will serialize as "sourceProfilePath": "" instead of null. Any consumer that checks for absence via $null -eq $pack.sourceProfilePath will get $false — incorrect behaviour. All other fields in the block are required non-nullable, so only sourceProfilePath is affected.
🐛 Proposed fix
```diff
- sourceProfilePath = [string]$pack.sourceProfilePath
+ sourceProfilePath = $pack.sourceProfilePath
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- sourceProfilePath = [string]$pack.sourceProfilePath
+ sourceProfilePath = $pack.sourceProfilePath
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/normalize-signature-pack.ps1` at line 37, The current
assignment sourceProfilePath = [string]$pack.sourceProfilePath coerces $null to
"" and loses null semantics; change it to preserve null by checking
$pack.sourceProfilePath for $null and only casting to [string] when it's
non-null (set sourceProfilePath to $null if $pack.sourceProfilePath is $null);
refer to the variable name sourceProfilePath and the input
pack.sourceProfilePath to locate and update the assignment.
```powershell
if ($fingerprint.moduleList -isnot [System.Collections.IEnumerable]) {
    Add-ValidationError -Errors $errors -Message "moduleList must be an array"
}
```
[System.Collections.IEnumerable] incorrectly accepts JSON string values as a valid moduleList.
In PowerShell, System.String implements IEnumerable<char>, so for a fingerprint JSON like { "moduleList": "some_string" }, the condition $fingerprint.moduleList -isnot [System.Collections.IEnumerable] evaluates to $false — no error is added, and the strict-mode check silently passes. The intent is clearly to require an array; use [array] instead.
🐛 Proposed fix
```diff
- if ($fingerprint.moduleList -isnot [System.Collections.IEnumerable]) {
+ if ($fingerprint.moduleList -isnot [array]) {
      Add-ValidationError -Errors $errors -Message "moduleList must be an array"
  }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- if ($fingerprint.moduleList -isnot [System.Collections.IEnumerable]) {
+ if ($fingerprint.moduleList -isnot [array]) {
      Add-ValidationError -Errors $errors -Message "moduleList must be an array"
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/validate-binary-fingerprint.ps1` around lines 40 - 42, The check using
$fingerprint.moduleList -isnot [System.Collections.IEnumerable] incorrectly
treats strings as valid because System.String implements IEnumerable; replace
that type test with an explicit array check (e.g. use -isnot [array] on
$fingerprint.moduleList) so non-array JSON values (like strings) cause
Add-ValidationError to be invoked; update the validation near the
$fingerprint.moduleList check and keep the same Add-ValidationError call and
message.
Merge-Readiness Refresh (2026-02-19)
Affected Profiles
Authoritative Live Evidence
No merge performed in this pass; PR remains ready for reviewer approval + merge button.
Add the native extender bridge/runtime path, backend routing and capability contracts, profile updates, and deterministic tooling/tests needed for strict ROE live validation. Includes Windows native build script hardening for VS18 generator discovery and host artifact verification. Co-authored-by: Codex <noreply@openai.com>
Align architecture, profile format, external tools setup, live validation runbook, and TODO tracking with the proven-core extender workflow and strict ROE gate acceptance criteria. Co-authored-by: Codex <noreply@openai.com>
Capture the single accepted live validation run for reconciliation acceptance. Includes strict non-blocked classification and extender route/hook evidence under runId 20260218-213838. Co-authored-by: Codex <noreply@openai.com>
Force-pushed eb697d3 to c00bfec
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c00bfece8c
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```csharp
var session = await _runtime.AttachAsync(effectiveProfileId);
if (variant is not null)
{
    SelectedProfileId = effectiveProfileId;
```
Preserve universal_auto selection after resolving variant
When attach starts from universal_auto, this assignment permanently replaces the user’s selected profile with the resolved concrete variant. After detaching, subsequent attaches no longer re-run universal resolution, so switching to a different mod/process can reuse a stale profile and trigger profile mismatch/hard-fail behavior instead of auto-detecting again. Keeping the UI selection as universal_auto avoids regressing the auto-resolution workflow.
```csharp
ExecutionKind.Save => await ExecuteSaveActionAsync(request, cancellationToken),
ExecutionKind.CodePatch => await ExecuteCodePatchActionAsync(request, cancellationToken),
ExecutionKind.Freeze => new ActionExecutionResult(false, "Freeze actions must be handled by the orchestrator, not the runtime adapter.", AddressSource.None),
ExecutionKind.Sdk => await ExecuteSdkActionAsync(request, cancellationToken),
```
Honor non-extender backend decisions for SDK-tagged actions
This branch routes ExecutionKind.Sdk back into ExecuteSdkActionAsync even when the backend router has already selected the legacy memory path, so a profile forcing backendPreference: "memory"/"helper" for set_credits will still go through SDK gating and can fail with SDK-specific errors instead of executing via the chosen backend. In practice this makes backend preference overrides ineffective for SDK-tagged actions.
Co-authored-by: Codex <noreply@openai.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 020608e093
```csharp
using var client = new NamedPipeClientStream(
    serverName: ".",
    pipeName: DefaultPipeName,
```
Read configured extender pipe name before connecting
NamedPipeExtenderBackend always connects to the hard-coded DefaultPipeName in SendCoreAsync, so runtime probe/execute calls fail whenever the bridge host is started with a non-default pipe (the host supports SWFOC_EXTENDER_PIPE_NAME override in BridgeHostMain.cpp). In those environments every extender-routed action degrades to timeout/unreachable behavior even though the host is healthy, so the client should resolve the pipe name from the same env var before creating NamedPipeClientStream.
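The env-var-first resolution the comment asks for can be sketched in host-side terms; the helper name and the fallback default below are illustrative assumptions, not the project's actual identifiers:

```cpp
#include <cstdlib>
#include <string>

// Sketch: resolve the bridge pipe name as the comment describes: honor
// SWFOC_EXTENDER_PIPE_NAME when set and non-empty, otherwise fall back to a
// default. The default string here is an assumed placeholder, not the
// project's real constant.
std::string resolvePipeName() {
    const char* overridden = std::getenv("SWFOC_EXTENDER_PIPE_NAME");
    if (overridden != nullptr && *overridden != '\0') {
        return overridden;
    }
    return "swfoc_extender_bridge";
}
```

If the C# client applies the same lookup before constructing `NamedPipeClientStream`, both ends agree on the pipe name whenever the override is set.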
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tools/schemas/repro-bundle.schema.json (1)
25-25:⚠️ Potential issue | 🟡 Minor
`schemaVersion` should be bumped — five new `required` fields constitute a breaking schema change.

Adding `selectedHostProcess`, `backendRouteDecision`, `capabilityProbeSnapshot`, `hookInstallReport`, and `overlayState` to the `required` array means any repro bundle without these fields (captured before this PR) will now fail validation against a schema still pinned at `"const": "1.0"`. Bumping to `"1.1"` (or `"2.0"`) and updating the `const` in the validator would make the version boundary explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/schemas/repro-bundle.schema.json` at line 25, The schemaVersion const should be bumped to reflect the breaking change: update "schemaVersion" const from "1.0" to "1.1" (or "2.0") and update any validator code that checks the schemaVersion constant so it accepts the new version; this change is required because the newly added required properties selectedHostProcess, backendRouteDecision, capabilityProbeSnapshot, hookInstallReport, and overlayState make the schema incompatible with prior bundles, so ensure functions or modules that validate against the schemaVersion value are updated to the new "1.1" constant (or the chosen new version) to make the version boundary explicit.
src/SwfocTrainer.App/ViewModels/MainViewModel.cs (1)
671-688: ⚠️ Potential issue | 🟡 Minor

Status message doesn't reflect the `universal_auto` override.

When `universal_auto` is present and wins (Lines 677–680), the status at Line 686 still shows `recommended: <old profile>` even though `SelectedProfileId` is now `"universal_auto"`. This is a minor UX discrepancy that may confuse users.

🐛 Proposed fix
```diff
+ var selectedForDisplay = SelectedProfileId ?? recommended;
  Status = !string.IsNullOrWhiteSpace(recommended)
-     ? $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
+     ? $"Loaded {Profiles.Count} profiles (selected: {selectedForDisplay}, recommended: {recommended})"
      : $"Loaded {Profiles.Count} profiles";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 671 - 688, The status message can show a stale "recommended" value when UniversalProfileId wins; update the logic that sets Status to reflect the actual selected profile by using SelectedProfileId (or compute a displayRecommended variable) instead of the original recommended variable—i.e., after resolving resolvedProfileId/SelectedProfileId (and after the UniversalProfileId check), build the status string using SelectedProfileId when showing the chosen profile so it will show "recommended: universal_auto" when UniversalProfileId was selected.
🟠 Major comments (20)
TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx-2-2 (1)
2-2: ⚠️ Potential issue | 🟠 Major

Developer PII (username, machine name, and local absolute paths) committed to public repository.
The following attributes embed personal identifiable information that will be permanently stored in git history:
| Attribute | Value |
| --- | --- |
| `runUser` (line 2) | `DESKTOP-A60S9GP\Prekzursil` |
| `runDeploymentRoot` (line 5) | `Prekzursil_DESKTOP-A60S9GP_2026-02-18_23_39_54` |
| `storage` (line 16) | full local path under `c:\users\prekzursil\...` |
| `codeBase` (line 18) | full local path under `C:\Users\Prekzursil\...` |

If this artifact (and similar TRX files in this PR) must be committed, consider either:
- Post-processing the TRX before committing to scrub machine-specific fields, or
- Adding `TestResults/runs/**/*.trx` to `.gitignore` and committing only the aggregated `live-validation-summary.json` (which presumably does not carry these fields).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx` at line 2, The TRX contains developer PII in attributes runUser, runDeploymentRoot, storage, and codeBase; remove or neutralize those values in TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx (replace with generic placeholders or strip the attributes) and re-commit the sanitized TRX, and to prevent recurrence add an ignore rule for TRX artifacts (e.g. TestResults/runs/**/*.trx) to .gitignore and stop tracking existing TRX files (remove them from the index so they are not committed going forward).
TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx-8-8 (1)
8-8:⚠️ Potential issue | 🟠 Major
`startTime`/`endTime` timestamps are inconsistent with `duration` — live evidence artifact data integrity gap.
`duration="00:00:34.2247037"` (34.2 s) is correct and consistent with both the xUnit `Finished` log at `00:00:34.42` and the run-level `start`/`finish` wall clock (Lines 3, 35). However, `startTime` and `endTime` differ by only ~35 µs — essentially identical — because the xUnit VSTest adapter flushes per-test timestamps at result-collection time rather than at actual test start/end boundaries.
selectedHostProcess,backendRouteDecision,capabilityProbeSnapshot,hookInstallReport). For an artifact that is the authoritative live-validation evidence, this reduces its forensic value. Consider enriching the test with explicit wall-clock markers logged toStdOut, or using a wrapper that captures OS-level start/end timestamps and injects them into the repro bundle rather than relying on VSTest's per-test timestamps.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx` at line 8, The per-test startTime/endTime written by the VSTest adapter are not true wall-clock boundaries (they're flushed at result collection), so the test SwfocTrainer.Tests.Profiles.LiveTacticalToggleWorkflowTests.Tactical_Toggles_Should_Execute_And_Revert_When_Tactical_Mode must emit reliable OS-level timestamps: modify that test (or its test fixture/wrapper) to capture high-resolution DateTime.UtcNow (or Stopwatch ticks) at the actual test start and end and write them to StdOut (or into the test result payload) so they become part of the run artifact; alternatively implement a thin wrapper around the test execution that records OS timestamps and injects them into the repro bundle/trace rather than relying on VSTest's startTime/endTime.
TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx-2-2 (1)
2-2: ⚠️ Potential issue | 🟠 Major

PII (username + hostname) committed to repository.
`runUser="DESKTOP-A60S9GP\Prekzursil"` embeds the developer's Windows username and machine hostname directly in a committed artifact. Combined with the same values in `computerName` (Line 8) and the `storage`/`codeBase` paths (Lines 16–18), the full local identity is permanently written into Git history on a public repository.
REDACTEDor a CI-neutral placeholder), or generate the TRX in a sanitised form via a post-processing script in the tooling pipeline.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx` at line 2, The committed TRX embeds local PII in the TestRun element (attributes runUser and computerName and paths in storage/codeBase); sanitize these fields by replacing the values with a neutral placeholder (e.g., REDACTED or CI_MACHINE) before committing or emitting the file, and add a post-processing sanitiser in your test pipeline that locates the TestRun element and overwrites runUser, computerName, storage and codeBase attributes (or removes them) so no local username/hostname or local paths are written to Git history.
native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp-11-11 (1)
11-11:⚠️ Potential issue | 🟠 Major
`visible_` is not thread-safe — use `std::atomic<bool>`.

In a game overlay, `setVisible` is typically driven by an input/hotkey thread while `visible()` is polled by the render thread. Accessing a plain `bool` across threads without synchronization is undefined behaviour in C++. The fix is a one-line change and doesn't affect the `noexcept` contract.

🔒 Proposed fix
```diff
+#include <atomic>
+
 namespace swfoc::extender::overlay {

 class OverlayState {
 public:
     void setVisible(bool visible) noexcept;
     bool visible() const noexcept;

 private:
-    bool visible_ {false};
+    std::atomic<bool> visible_ {false};
 };
```

With `std::atomic<bool>`, the setter/getter implementations in `OverlayState.cpp` remain correct as-is — `operator=` and implicit `operator bool` use the default `seq_cst` ordering, which is safe for a simple visibility flag. If `relaxed` ordering is acceptable for performance, use `store`/`load` explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp` at line 11, The member visible_ in class OverlayState is accessed from different threads and must be made thread-safe: change its type from bool to std::atomic<bool> (add `#include` <atomic>) so reads/writes are atomic; no changes needed to the existing setVisible / visible methods in OverlayState.cpp since atomic<bool>'s default operations provide the required ordering (or you can use store/load with an explicit memory_order if you prefer).
TestResults/runs/20260218-213838/launch-context-fixture.json-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major

Developer's local username and machine path committed to source control; path also has a double-escaping bug.
Two distinct issues on this line:
1. PII / path leakage (major): The `detail` field contains a hardcoded local filesystem path that embeds the developer's OS username: `C:\\Users\\Prekzursil\\AppData\\Local\\...`. Committing personal local paths to source control leaks filesystem/username information and becomes a permanent part of git history. The value should be redacted or replaced with a sanitised placeholder such as `<python-executable>` or a path relative to a well-known env var (e.g., `%LOCALAPPDATA%\\...`).
2. Malformed path from double-escaping (minor): After JSON string parsing, `\\` becomes `\` and `\\\\` becomes `\\`. The segment `Programs\\\\Python\\\\Python312\\\\python.exe` decodes to `Programs\\Python\\Python312\\python.exe` — literal double-backslashes — which is not a valid Windows path. The first half of the path uses correct single `\\` escaping while the second half uses `\\\\`. This is a classic PowerShell `ConvertTo-Json` artefact when the input string was already escaped. The path serialisation should use a single level of JSON escaping throughout.

🛡️ Proposed fix — redact PII and fix escaping
```diff
- "detail": "python produced no output. executable: C:\\Users\\Prekzursil\\AppData\\Local\\Programs\\\\Python\\\\Python312\\\\python.exe",
+ "detail": "python produced no output. executable: %LOCALAPPDATA%\\Programs\\Python\\Python312\\python.exe",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/launch-context-fixture.json` at line 4, The "detail" string in the JSON contains a developer's local path (PII) and a double-escaping artifact; update the serialization that produces the "detail" field so it emits a sanitized value (e.g., "<python-executable>" or a platform-env-based placeholder like "%LOCALAPPDATA%\\...") instead of the full C:\\Users\\... path, and ensure JSON string escaping is performed only once so backslashes are single-escaped (use the actual path string not an already-escaped string before calling JSON serialization). Target the code that sets the "detail" field (the producer that writes launch-context-fixture.json / the value assigned to "detail") and replace hardcoded local paths with a redaction/placeholder and fix the escaping pipeline to avoid double-escaped backslashes.
native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp-20-31 (1)
20-31:⚠️ Potential issue | 🟠 Major
`HookLifecycleManager` has no thread-safety guarantees — concurrent access to `hooks_` would be undefined behaviour.

`markInstalled`, `markFailed`, `markRolledBack`, and `get` all read/write a shared `std::unordered_map` with no synchronisation. If called from more than one thread, the result is a data race and UB.

Add protection using `std::shared_mutex` (shared lock for `get`, exclusive lock for the three mutators), or document explicitly that the class must be accessed from a single thread only.
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp` around lines 20 - 31, HookLifecycleManager currently accesses the shared unordered_map hooks_ from multiple methods without synchronization; add a std::shared_mutex member (e.g., mutex_) and protect access: use std::unique_lock<std::shared_mutex> in mutators markInstalled, markFailed, markRolledBack and use std::shared_lock<std::shared_mutex> in get to allow concurrent reads; also include the appropriate header(s) (<shared_mutex>) and ensure the lock objects are created at the start of each method to guard hooks_.
tools/research/extract-pe-metadata.ps1-10-12 (2)
10-12: ⚠️ Potential issue | 🟠 Major

Failure condition missing explicit classification code.
As per coding guidelines, "Failure conditions must use explicit classification codes." The bare `throw "Module not found: ..."` gives no structured code for callers or CI pipelines to key on.

🔧 Proposed fix
```diff
- throw "Module not found: $moduleFullPath"
+ throw "[MODULE_NOT_FOUND] Module not found: $moduleFullPath"
```

As per coding guidelines: "Failure conditions must use explicit classification codes."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` around lines 10 - 12, Replace the bare throw in the module existence check with a failure that includes an explicit classification code so callers/CI can parse it: when Test-Path for $moduleFullPath fails, throw or write an error that prefixes a machine-readable code (e.g. ERR_MODULE_NOT_FOUND or E_MODULE_NOT_FOUND) along with the descriptive message and ensure the thrown object/message includes both the code and $moduleFullPath (update the throw/writing location around the Test-Path check referencing $moduleFullPath).
10-12: ⚠️ Potential issue | 🟠 Major

Failure condition missing explicit classification code.
The `throw` emits a plain prose string. As per coding guidelines, "Failure conditions must use explicit classification codes." Downstream scripts and CI pipelines cannot key on a structured code for routing or suppression.

🔧 Proposed fix
```diff
- throw "Module not found: $moduleFullPath"
+ throw "[MODULE_NOT_FOUND] Module not found: $moduleFullPath"
```

As per coding guidelines: "Failure conditions must use explicit classification codes."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` around lines 10 - 12, Replace the plain string throw with a structured failure that includes an explicit classification code; in the file's existence check where the script currently does throw "Module not found: $moduleFullPath", instead throw an exception whose message begins with a stable classification token (e.g. "MODULE_NOT_FOUND") or construct an ErrorRecord/Exception with a Code-like field, include $moduleFullPath in the message, and throw that object so downstream tooling can reliably parse the classification (locate the throw that references $moduleFullPath in extract-pe-metadata.ps1 and update it accordingly).
tools/research/extract-pe-metadata.ps1-1-38 (2)
1-38: ⚠️ Potential issue | 🟠 Major

No deterministic fixture or smoke command included.
As per coding guidelines, "Every tooling script change must include a deterministic fixture or smoke command." The script ships with no inline example, no sibling fixture file, and no CI smoke step that invokes it against a known-good binary and asserts the output schema. Add a smoke invocation at minimum as a comment block, or a
`tests/fixtures/smoke-extract-pe-metadata.ps1`:

```powershell
# Smoke example — run from repo root against a committed fixture binary:
# .\tools\research\extract-pe-metadata.ps1 `
#   -ModulePath "tests/fixtures/sample.exe" `
#   -OutputPath "TestResults/runs/smoke/sample.metadata.json"
# $out = Get-Content "TestResults/runs/smoke/sample.metadata.json" | ConvertFrom-Json
# if ($out.schemaVersion -ne "1.0") { throw "[SMOKE_SCHEMA_MISMATCH] schemaVersion unexpected" }
# if (-not $out.fileVersion) { throw "[SMOKE_FIELD_EMPTY] fileVersion is empty" }
```

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 38, The script tools/research/extract-pe-metadata.ps1 lacks the required deterministic smoke/fixture; add a minimal reproducible smoke invocation and assertion either as a commented example at top of the script or as a separate tests/fixtures/smoke-extract-pe-metadata.ps1 that calls the script with -ModulePath and -OutputPath, reads the produced JSON and asserts result.schemaVersion == "1.0" and that result.fileVersion (or another required field) is non-empty; reference the param names ModulePath and OutputPath, the output file schemaVersion field, and the conversion step (ConvertFrom-Json) in the assertions so CI or a reviewer can run the smoke and validate the schema.
1-38: ⚠️ Potential issue | 🟠 Major

No deterministic fixture or smoke command included.
As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command. Neither the script nor its diff adds an inline example invocation, a sibling fixture file, or a CI smoke step that validates the script's output against a known-good artifact.
A minimal approach is a smoke invocation comment block at the bottom, or a companion
`tests/fixtures/extract-pe-metadata.smoke.ps1`:

```powershell
# Smoke example (run from repo root, output verified against schema):
# .\tools\research\extract-pe-metadata.ps1 `
#   -ModulePath "tests/fixtures/sample.exe" `
#   -OutputPath "TestResults/runs/smoke/sample.metadata.json"
# $out = Get-Content "TestResults/runs/smoke/sample.metadata.json" | ConvertFrom-Json
# if ($out.schemaVersion -ne "1.0") { throw "[SMOKE_FAIL] schemaVersion mismatch" }
```

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 38, Add a deterministic smoke invocation for the extract-pe-metadata.ps1 script: include a commented example block at the bottom of extract-pe-metadata.ps1 showing how to run Extract-PE-Metadata (ModulePath and OutputPath parameters), then read and validate the produced JSON (ConvertFrom-Json) and assert schemaVersion == "1.0" (throw on mismatch); alternatively add a small companion fixture file tests/fixtures/extract-pe-metadata.smoke.ps1 that performs the same deterministic run against a committed sample (e.g., tests/fixtures/sample.exe) and exits non-zero on failure so CI or a reviewer can run the smoke test.
tools/research/extract-pe-metadata.ps1-1-4 (2)
1-4: ⚠️ Potential issue | 🟠 Major

Missing script help block and Windows-only assumption documentation.
`[System.Diagnostics.FileVersionInfo]::GetVersionInfo` only yields meaningful `ProductName`, `FileVersion`, `OriginalFilename`, etc. for PE (Windows) binaries. Running this against a non-PE file silently emits null/empty fields with no error. As per coding guidelines, "Windows live target assumptions must be kept explicit in documentation and script help." Add at least a `#Requires` comment or a comment-based help block with a `.NOTES` section calling this out, and add `[CmdletBinding()]` to enable `-Verbose`.

📝 Proposed minimal help block
```diff
+<#
+.SYNOPSIS
+    Extracts PE (Portable Executable) metadata from a Windows binary and writes it as structured JSON.
+
+.DESCRIPTION
+    Reads FileVersionInfo and file-system properties from a Windows PE binary and emits a
+    schema-versioned JSON artifact. Only meaningful on PE/Windows targets; non-PE binaries
+    will yield empty version fields without error.
+
+.PARAMETER ModulePath
+    Path to the target PE module (e.g. StarWarsG.exe).
+
+.PARAMETER OutputPath
+    Optional. Destination JSON path. Defaults to <ModulePath-without-extension>.metadata.json.
+
+.NOTES
+    WINDOWS-ONLY ASSUMPTION: FileVersionInfo fields are PE-specific. Run on Windows hosts only.
+    Output isolation: pass an explicit -OutputPath rooted under TestResults/runs/<runId>/ for
+    per-run artifact tracking.
+#>
+[CmdletBinding()]
 param(
     [Parameter(Mandatory = $true)][string]$ModulePath,
     [string]$OutputPath
 )
```

As per coding guidelines: "Windows live target assumptions must be kept explicit in documentation and script help."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 4, Add a comment-based help block and make the Windows-only assumption explicit, and enable -Verbose: add [CmdletBinding()] above the param(...) block to enable -Verbose, then add a comment-based help block (Synopsis, Description, Parameters for ModulePath and OutputPath, and a .NOTES section) that calls out that this script targets Windows PE files and that [System.Diagnostics.FileVersionInfo]::GetVersionInfo will return null/empty for non-PE files; alternatively include a top-level `#Requires` comment indicating Windows, and mention the ModulePath/OutputPath parameters and the reliance on [System.Diagnostics.FileVersionInfo]::GetVersionInfo in the .NOTES so callers know the Windows-only assumption.
tools/research/extract-pe-metadata.ps1-37-37 (2)
37-37: ⚠️ Potential issue | 🟠 Major

`Set-Content -Encoding UTF8` writes a BOM on Windows PowerShell 5.x, breaking JSON consumers. The official Microsoft documentation confirms that in PowerShell 5.1, `UTF8` encoding means "UTF-8 (with BOM)". The `utf8NoBOM` value is not available in PS 5.1; it was introduced in PowerShell 6. A BOM-prefixed JSON file can silently fail or be misparsed by `jq`, by `ConvertFrom-Json` in some hosts, and by any byte-level JSON parser that doesn't strip the BOM preamble. The safe cross-version fix is `[System.IO.File]::WriteAllText` with an explicit no-BOM encoding instance.

🔧 Proposed fix

```diff
-($result | ConvertTo-Json -Depth 6) | Set-Content -Path $OutputPath -Encoding UTF8
+$json = $result | ConvertTo-Json -Depth 6
+[System.IO.File]::WriteAllText($OutputPath, $json, [System.Text.UTF8Encoding]::new($false))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/extract-pe-metadata.ps1` at line 37, The current write uses Set-Content with -Encoding UTF8 which produces a BOM on PowerShell 5.x; replace that write with a call to [System.IO.File]::WriteAllText using the JSON string produced by ConvertTo-Json ($result | ConvertTo-Json -Depth 6) and an explicit no-BOM encoding instance (e.g. New-Object System.Text.UTF8Encoding($false)) so the file at $OutputPath is written as UTF‑8 without a BOM; update the code that currently calls Set-Content to build the JSON string then call WriteAllText with the no‑BOM encoding.
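The BOM failure mode above is easy to reproduce outside PowerShell. A minimal Python sketch (illustrative only, not part of the repo's tooling) shows that an `EF BB BF` prefix breaks a strict UTF-8 JSON parse while a BOM-aware decode succeeds:

```python
import json

payload = '{"schemaVersion": "1.0"}'

# Windows PowerShell 5.1's `Set-Content -Encoding UTF8` prepends these three bytes.
with_bom = b"\xef\xbb\xbf" + payload.encode("utf-8")

# A consumer that decodes as plain UTF-8 keeps the BOM as a stray U+FEFF
# character, and a strict JSON parse then fails.
try:
    json.loads(with_bom.decode("utf-8"))
    bom_parse_ok = True
except json.JSONDecodeError:
    bom_parse_ok = False

# Decoding with "utf-8-sig" (or writing with no BOM in the first place) works.
clean = json.loads(with_bom.decode("utf-8-sig"))

print(bom_parse_ok)            # False
print(clean["schemaVersion"])  # 1.0
```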
src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs-154-158 (1)
154-158: ⚠️ Potential issue | 🟠 Major

`BackendPreference`/`HostPreference` fallback to parent is unreachable due to non-null defaults.

As noted in `ProfileModels.cs`, the record defaults (`"auto"`, `"starwarsg_preferred"`) mean `child.BackendPreference` and `child.HostPreference` are never null/whitespace after deserialization, so lines 155 and 157 always select the child value. The parent's values can never propagate.
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs` around lines 154 - 158, The fallback logic for BackendPreference and HostPreference in FileSystemProfileRepository.cs never uses the parent because child.BackendPreference and child.HostPreference are initialized to non-empty defaults; update the ternaries to treat those default sentinel values as "not set". Specifically, when constructing the merged profile, replace the checks string.IsNullOrWhiteSpace(child.BackendPreference) and string.IsNullOrWhiteSpace(child.HostPreference) with comparisons that detect the record defaults (e.g., child.BackendPreference == ProfileDefaults.BackendPreferenceDefault or child.BackendPreference == "auto", and child.HostPreference == ProfileDefaults.HostPreferenceDefault or child.HostPreference == "starwarsg_preferred") and fall back to parent.BackendPreference / parent.HostPreference when the child equals the sentinel default.

src/SwfocTrainer.Core/Models/ProfileModels.cs-46-50 (1)
46-50: ⚠️ Potential issue | 🟠 Major

Default values for `BackendPreference` and `HostPreference` prevent child profiles from inheriting parent values.

The merge logic (lines 155–157 of `FileSystemProfileRepository`) uses `string.IsNullOrWhiteSpace()` to decide whether to use a child's value or fall back to the parent's. Since `BackendPreference` defaults to `"auto"` and `HostPreference` defaults to `"starwarsg_preferred"`, any child profile omitting these fields will be deserialized with the default non-empty strings. The merge will then treat these defaults as explicitly set values, making it impossible to inherit from the parent.

While all current profiles explicitly set both fields (no inheritance is attempted), this is a fragile design that prevents inheritance semantics from working correctly. Consider either:

- Using `null` as the default and applying `"auto"`/`"starwarsg_preferred"` as runtime defaults downstream, or
- Updating the merge logic to check if the child value equals the record default before deciding to fall back to the parent.
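The two remediation options can be contrasted in a small Python sketch (names mirror the C# fields but are hypothetical here): with `None` as the "unset" marker, parent values actually propagate, whereas a sentinel default always masks the parent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    # None means "not set in JSON", unlike the sentinel default "auto".
    backend_preference: Optional[str] = None

def merge(child: Profile, parent: Profile, runtime_default: str = "auto") -> str:
    # Child wins only when it actually set a value; otherwise the parent
    # propagates, and the runtime default applies last.
    value = child.backend_preference or parent.backend_preference
    return value if value is not None else runtime_default

parent = Profile(backend_preference="extender_only")
print(merge(Profile(), parent))                           # extender_only (inherited)
print(merge(Profile(backend_preference="auto"), parent))  # auto (explicitly set)
print(merge(Profile(), Profile()))                        # auto (runtime default)
```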
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Core/Models/ProfileModels.cs` around lines 46 - 50, The record defaults for BackendPreference ("auto") and HostPreference ("starwarsg_preferred") on ProfileModels.cs cause child profiles to appear explicitly set and block inheritance in FileSystemProfileRepository's merge logic (used around the merge method that checks string.IsNullOrWhiteSpace on the child). Fix by making those properties nullable defaults (BackendPreference = null, HostPreference = null) so omitted fields deserialize as null and inheritance works, and ensure any runtime use that needs "auto"/"starwarsg_preferred" applies those as runtime defaults; alternatively, if you prefer code-change in merging, update the merge in FileSystemProfileRepository to treat the record default values as "unset" when deciding to fall back to parent (i.e., check for null or equal to the record default before using child value).

src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs-101-114 (1)
101-114: ⚠️ Potential issue | 🟠 Major

`CountSignals` uses a negative health-status filter that admits degraded symbols as valid signals.

The condition skips a symbol only if its `HealthStatus` is `Unresolved`. `SymbolHealthStatus` has three values: `Healthy`, `Degraded`, and `Unresolved`. The current filter allows `Degraded` symbols to be counted, but degraded symbols should not contribute to mode determination. The stated intent is to count only symbols with a healthy status. Use a positive guard instead:

🛡️ Proposed fix

```diff
-if (info.Address == nint.Zero || info.HealthStatus == SymbolHealthStatus.Unresolved)
+if (info.Address == nint.Zero || info.HealthStatus != SymbolHealthStatus.Healthy)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs` around lines 101 - 114, CountSignals currently treats any symbol that is not Unresolved as valid, which incorrectly counts Degraded symbols; update the guard in CountSignals to only accept symbols whose info.HealthStatus is SymbolHealthStatus.Healthy (and still ensure info.Address != nint.Zero and info != null). Locate the method CountSignals and replace the negative health check against Unresolved with a positive check for SymbolHealthStatus.Healthy so only healthy symbols contribute to the count.src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs-225-258 (1)
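The difference between the negative and positive guard above is mechanical; a Python sketch with a mirrored three-value enum (illustrative, not the actual C# type) makes the miscount visible:

```python
from enum import Enum

class SymbolHealthStatus(Enum):
    HEALTHY = 1
    DEGRADED = 2
    UNRESOLVED = 3

symbols = [
    SymbolHealthStatus.HEALTHY,
    SymbolHealthStatus.DEGRADED,
    SymbolHealthStatus.UNRESOLVED,
]

# Negative guard: skips only UNRESOLVED, so DEGRADED slips through as a "signal".
negative_count = sum(1 for s in symbols if s is not SymbolHealthStatus.UNRESOLVED)

# Positive guard: counts exactly the HEALTHY symbols.
positive_count = sum(1 for s in symbols if s is SymbolHealthStatus.HEALTHY)

print(negative_count, positive_count)  # 2 1
```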
225-258: ⚠️ Potential issue | 🟠 Major

`Thread.Sleep(200)` inside `lock (HostSync)` blocks the thread pool and holds the lock.

Two compounding problems:

- Lock held for 200 ms: any concurrent caller racing to `TryStartBridgeHostProcess` stalls on lock acquisition for the entire sleep duration, not just for the process-start check.
- Async context: `TryStartBridgeHostProcess` is reached from an `async` method (`SendAsync`). The `.Sleep` pins the current thread pool thread instead of yielding, risking pool starvation under sustained concurrent usage.

Minimum fix: move the wait outside the lock so other callers can read `HasExited` immediately.

🐛 Proposed fix (minimum: move sleep outside lock)

```diff
-    try
-    {
-        _bridgeHostProcess = Process.Start(new ProcessStartInfo
-        {
-            FileName = hostPath,
-            WorkingDirectory = Path.GetDirectoryName(hostPath) ?? AppContext.BaseDirectory,
-            UseShellExecute = false,
-            CreateNoWindow = true
-        });
-        Thread.Sleep(200);
-        return _bridgeHostProcess is not null && !_bridgeHostProcess.HasExited;
-    }
+    Process? started = null;
+    try
+    {
+        _bridgeHostProcess = Process.Start(new ProcessStartInfo
+        {
+            FileName = hostPath,
+            WorkingDirectory = Path.GetDirectoryName(hostPath) ?? AppContext.BaseDirectory,
+            UseShellExecute = false,
+            CreateNoWindow = true
+        });
+        started = _bridgeHostProcess;
+    }
     catch { _bridgeHostProcess = null; return false; }
+    }
+    // Wait for process startup *outside* the lock so concurrent callers are not blocked.
+    if (started is not null)
+    {
+        Thread.Sleep(200);
     }
+    return started is not null && !started.HasExited;
 }
```

Preferred fix: make `TryStartBridgeHostProcess` async, replace `Thread.Sleep` with `await Task.Delay(200)`, and replace the `lock` with a `SemaphoreSlim`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs` around lines 225 - 258, TryStartBridgeHostProcess currently holds lock(HostSync) while calling Thread.Sleep(200), which blocks thread-pool threads and prevents concurrent callers (e.g., SendAsync) from observing _bridgeHostProcess.HasExited; change the method to an async implementation: replace the lock(HostSync) with a SemaphoreSlim (e.g., _hostSemaphore) awaited with WaitAsync/Release, replace Thread.Sleep(200) with await Task.Delay(200) (moved outside the critical section if you keep the lock approach), and ensure all call sites (including SendAsync) await the async TryStartBridgeHostProcess; ensure _bridgeHostProcess is still checked/assigned inside the semaphore-protected section and the post-delay HasExited check happens after the delay without holding the semaphore longer than necessary.tools/research/run-capability-intel.ps1-16-18 (1)
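The cost of sleeping while holding the lock can be demonstrated with a small Python threading sketch (illustrative only; the actual C# fix uses `SemaphoreSlim`/`Task.Delay`): a concurrent reader is blocked for the full startup wait when the sleep sits inside the critical section, and almost not at all when it is moved outside.

```python
import threading
import time

lock = threading.Lock()

def sleep_inside_lock(acquired: threading.Event) -> None:
    with lock:
        acquired.set()
        time.sleep(0.2)  # simulates Thread.Sleep(200) while holding the lock

def sleep_outside_lock(acquired: threading.Event) -> None:
    with lock:
        acquired.set()   # only the process-start bookkeeping stays under the lock
    time.sleep(0.2)      # startup wait moved outside the critical section

def reader_wait(worker) -> float:
    acquired = threading.Event()
    t = threading.Thread(target=worker, args=(acquired,))
    t.start()
    acquired.wait()              # ensure the worker reached its critical section first
    begin = time.monotonic()
    with lock:                   # concurrent caller just wants to read shared state
        pass
    elapsed = time.monotonic() - begin
    t.join()
    return elapsed

print(reader_wait(sleep_inside_lock) > 0.1)   # True: blocked ~200 ms
print(reader_wait(sleep_outside_lock) < 0.1)  # True: lock is free almost immediately
```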
16-18: ⚠️ Potential issue | 🟠 Major

Orchestrator output root `TestResults/research/<RunId>/` violates the per-run isolation mandate.

This is the top-level orchestrator for the capability intel workflow and sets the `$OutputRoot` that flows into all three sub-scripts. Fixing it here corrects the path for all artifacts produced by the pipeline (fingerprint.json, signature-pack.json, analysis-notes.md) in a single change.

🐛 Proposed fix

```diff
 if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
-    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
+    $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId/research"
 }
```

As per coding guidelines: "Per-run output isolation must be maintained under `TestResults/runs/<runId>/` directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/run-capability-intel.ps1` around lines 16 - 18, The current orchestration sets $OutputRoot to TestResults/research/$RunId which violates per-run isolation; change the Join-Path call that assigns $OutputRoot (when [string]::IsNullOrWhiteSpace($OutputRoot)) to use the mandated structure "TestResults/runs/<RunId>" (e.g., Join-Path $repoRoot "TestResults/runs/$RunId") so all downstream artifacts (fingerprint.json, signature-pack.json, analysis-notes.md) produced via $OutputRoot are placed under the per-run isolated directory; ensure $RunId is used exactly and no other references to "research" remain in this assignment.

tools/research/generate-signature-candidates.ps1-29-32 (1)
29-32: ⚠️ Potential issue | 🟠 Major

Default output path `TestResults/research/<RunId>/` violates the per-run isolation mandate.

The coding guideline requires all per-run output under `TestResults/runs/<runId>/`. This script (and `run-capability-intel.ps1`) write to a parallel `TestResults/research/` subtree, meaning downstream consumers that expect artifacts under the canonical path won't find them.

🐛 Proposed fix

```diff
 if ([string]::IsNullOrWhiteSpace($OutputPath)) {
-    $outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
+    $outputDir = Join-Path $repoRoot "TestResults/runs/$RunId/research"
     New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
     $OutputPath = Join-Path $outputDir "signature-pack.json"
 } else {
```

As per coding guidelines: "Per-run output isolation must be maintained under `TestResults/runs/<runId>/` directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/generate-signature-candidates.ps1` around lines 29 - 32, The script currently builds a default per-run output directory using TestResults/research/$RunId which violates the per-run isolation mandate; update the path construction so $outputDir uses "TestResults/runs/$RunId" instead of "TestResults/research/$RunId" (i.e., change the Join-Path call that combines $repoRoot and "TestResults/research/$RunId" to use "TestResults/runs/$RunId") and leave the subsequent New-Item and $OutputPath assignment (Join-Path for "signature-pack.json") unchanged so artifacts are created under the canonical per-run tree; ensure references to $OutputPath/$outputDir in this script (and mirror the same change in run-capability-intel.ps1) are consistent.
```csharp
private static readonly IReadOnlyDictionary<string, SdkOperationDefinition> Operations =
    new Dictionary<string, SdkOperationDefinition>(StringComparer.OrdinalIgnoreCase)
    {
        ["list_selected"] = SdkOperationDefinition.ReadOnly("list_selected"),
        ["list_nearby"] = SdkOperationDefinition.ReadOnly("list_nearby"),
        ["spawn"] = SdkOperationDefinition.Mutation("spawn", RuntimeMode.Tactical, RuntimeMode.Galactic),
        ["kill"] = SdkOperationDefinition.Mutation("kill", RuntimeMode.Tactical),
        ["set_owner"] = SdkOperationDefinition.Mutation("set_owner", RuntimeMode.Tactical, RuntimeMode.Galactic),
        ["teleport"] = SdkOperationDefinition.Mutation("teleport", RuntimeMode.Tactical),
        ["set_planet_owner"] = SdkOperationDefinition.Mutation("set_planet_owner", RuntimeMode.Galactic),
        ["set_hp"] = SdkOperationDefinition.Mutation("set_hp", RuntimeMode.Tactical),
        ["set_shield"] = SdkOperationDefinition.Mutation("set_shield", RuntimeMode.Tactical),
        ["set_cooldown"] = SdkOperationDefinition.Mutation("set_cooldown", RuntimeMode.Tactical)
    };
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# First, let's find the SdkOperationRouter implementation
echo "=== Searching for SdkOperationRouter ==="
rg -n --type=cs 'SdkOperationRouter' | head -20
echo -e "\n=== Searching for set_credits usage ==="
rg -n --type=cs 'set_credits' -C 3
echo -e "\n=== Searching for TryGet calls on catalog ==="
rg -n --type=cs 'TryGet\s*\(' -C 3 | head -50
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 36824
🏁 Script executed:

```bash
#!/bin/bash
# Let's find the file containing SdkOperationRouter
echo "=== Finding SdkOperationRouter class ==="
fd --type=f --name='*.cs' | xargs grep -l 'class SdkOperationRouter'
echo -e "\n=== Finding files related to Sdk operations ==="
fd --type=f --name='*.cs' | xargs grep -l 'SdkOperationCatalog\|SdkOperationRouter' | head -10
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 677
🏁 Script executed:

```bash
#!/bin/bash
echo "=== SdkOperationRouter.cs full context ==="
cat -n src/SwfocTrainer.Core/Services/SdkOperationRouter.cs | head -100
echo -e "\n=== Check if set_credits bypasses SdkOperationRouter ==="
rg -n --type=cs 'set_credits' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 5 -B 5 | head -60
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 6009
🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check ExecuteAsync path in RuntimeAdapter ==="
rg -n --type=cs 'ExecuteAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs | head -20
echo -e "\n=== Check how SDK operations are routed ==="
rg -n --type=cs 'ExecutionKind.Sdk' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -B 3 -A 3 | head -50
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 1105
🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check if ExecuteSdkActionAsync has special handling for set_credits ==="
rg -n --type=cs 'ExecuteSdkActionAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 30
echo -e "\n=== Verify the test expectations for set_credits with Sdk ==="
rg -n --type=cs 'ExecutionKind.Sdk' tests/ | grep -i 'set_credits'
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 3545
🏁 Script executed:

```bash
#!/bin/bash
echo "=== Full ExecuteSdkActionAsync implementation ==="
rg -n --type=cs 'private async Task<ActionExecutionResult> ExecuteSdkActionAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 50
echo -e "\n=== Check if there is an AllowUnknownMode property in SdkOperationDefinition ==="
rg -n --type=cs 'class SdkOperationDefinition\|record SdkOperationDefinition' src/SwfocTrainer.Core/Models/ -A 20
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 2262
🏁 Script executed:

```bash
#!/bin/bash
echo "=== Find SdkOperationDefinition ==="
find . -name "*.cs" -type f | xargs grep -l "SdkOperationDefinition" | head -5
echo -e "\n=== Search for SdkOperationDefinition definition ==="
rg -n --type=cs 'record SdkOperationDefinition|class SdkOperationDefinition' -A 15
echo -e "\n=== Check IsMutatingSdkOperation for set_credits ==="
rg -n --type=cs 'IsMutatingSdkOperation' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 10
```

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 2594
set_credits is missing from the SDK operation catalog and will fail at runtime.
The execution path for set_credits with ExecutionKind.Sdk routes through SdkOperationRouter.ExecuteAsync, which calls SdkOperationCatalog.TryGet("set_credits") at line 60. Since set_credits is absent from the catalog (lines 8–21), this lookup returns false, causing the router to immediately reject the operation with "not part of the v1 catalog." No fallback or bypass exists.
Add set_credits to the catalog as a mutation operation for RuntimeMode.Galactic:
Suggested catalog entry

```csharp
["set_credits"] = SdkOperationDefinition.Mutation("set_credits", RuntimeMode.Galactic)
```
This is required for the ROE profile's SDK-routed execution to function, as confirmed by ProfileActionCatalogTests and LiveRoeRuntimeHealthTests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/SwfocTrainer.Core/Models/SdkOperationCatalog.cs` around lines 8 - 21, The
Operations dictionary in SdkOperationCatalog is missing the "set_credits" entry
which causes SdkOperationRouter.ExecuteAsync to fail when it calls
TryGet("set_credits"); add an entry to the static Operations dictionary using
SdkOperationDefinition.Mutation for "set_credits" with RuntimeMode.Galactic
(i.e., mirror the other entries like "set_planet_owner" but using the key
"set_credits" and Mutation("set_credits", RuntimeMode.Galactic)) so TryGet
succeeds and the SDK-routed execution path can proceed.
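The fail-closed catalog semantics behind this finding can be sketched in Python (the class and tuple shapes are hypothetical, mirroring the case-insensitive C# dictionary): an unknown operation id is rejected outright, so a missing entry is a hard failure, not a degraded path.

```python
class SdkOperationCatalog:
    """Case-insensitive operation catalog that fails closed on unknown ids."""

    def __init__(self, entries):
        self._entries = {key.lower(): value for key, value in entries.items()}

    def try_get(self, operation_id):
        key = operation_id.lower()
        return key in self._entries, self._entries.get(key)

catalog = SdkOperationCatalog({
    "set_planet_owner": ("mutation", ("Galactic",)),
    "set_credits": ("mutation", ("Galactic",)),  # the entry the review says must be added
})

found, definition = catalog.try_get("SET_CREDITS")  # case-insensitive hit
print(found, definition)  # True ('mutation', ('Galactic',))

missing, _ = catalog.try_get("set_tech_level")
print(missing)  # False: the router rejects it as "not part of the v1 catalog"
```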
```powershell
$snapshot = New-Object System.Collections.Generic.List[object]
foreach ($proc in $processes) {
    $steamIds = Parse-SteamModIds -CommandLine $proc.CommandLine
    $hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\\.exe|sweaw\\.exe)$") { "launcher" } else { "unknown" }
```
Bug: double-backslash regex never matches swfoc.exe / sweaw.exe — all non-StarWarsG processes get hostRole = "unknown".
In PowerShell, backslash is not a string escape character, so "^(swfoc\\.exe|sweaw\\.exe)$" produces the literal regex ^(swfoc\\.exe|sweaw\\.exe)$. In .NET regex, \\ matches a literal backslash character, so this pattern requires a \ in the process name (e.g., swfoc\.exe). Actual process names like swfoc.exe will never match, causing the elseif branch to be dead code and those processes to always get "unknown" as hostRole.
Compare with line 56 which correctly uses single backslash: "^(swfoc\.exe|StarWarsG\.exe|sweaw\.exe)$".
🐛 Fix: use single backslash to escape the dot in regex

```diff
-    $hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\\.exe|sweaw\\.exe)$") { "launcher" } else { "unknown" }
+    $hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\.exe|sweaw\.exe)$") { "launcher" } else { "unknown" }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-$hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\\.exe|sweaw\\.exe)$") { "launcher" } else { "unknown" }
+$hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\.exe|sweaw\.exe)$") { "launcher" } else { "unknown" }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/collect-mod-repro-bundle.ps1` at line 62, The regex in the $hostRole
assignment uses double backslashes ("^(swfoc\\.exe|sweaw\\.exe)$") so it looks
for a literal backslash before ".exe" and never matches real process names;
update the pattern used with $proc.Name (or replace the whole elseif condition)
to escape the dot correctly (e.g., use "\." or "[.]" in the regex) so names like
"swfoc.exe" and "sweaw.exe" match and the elseif branch can set hostRole to
"launcher".
- complete overload, complexity, markdown, and PowerShell cleanup for PR49 blockers
- align repro bundle tooling/schema to v1.1 and update fixture/contracts
- sanitize committed live evidence artifacts and add remediation checklist traceability

Co-authored-by: Codex <noreply@openai.com>

- replace parameter-heavy diagnostics helper signatures in BackendRouter and CapabilityMapResolver
- initialize capability map DTO properties to satisfy Sonar unassigned-property checks

Co-authored-by: Codex <noreply@openai.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1f6d32687d
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```csharp
    bool isMutating,
    Dictionary<string, object?> diagnostics)
{
    if (capabilityReport.IsFeatureAvailable(request.Action.Id))
```
Honor capability confidence before allowing mutating extender routes
This branch allows extender routing whenever IsFeatureAvailable is true, but mutating operations can still be in an unproven state (Experimental) at probe time (for example, the new economy probe advertises available=true before hook installation is verified). In that scenario writes like set_credits are permitted instead of fail-closing on uncertain capability confidence, which weakens the safety gate for mutation paths.
Useful? React with 👍 / 👎.
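A fail-closed gate of the kind this comment asks for can be sketched as follows (enum and function names are hypothetical; the real code would consult the capability report's confidence field):

```python
from enum import Enum

class CapabilityConfidence(Enum):
    PROVEN = "proven"
    EXPERIMENTAL = "experimental"

def allow_route(available: bool, confidence: CapabilityConfidence, is_mutating: bool) -> bool:
    if not available:
        return False
    # Read-only operations may tolerate experimental capabilities; mutations
    # fail closed until the hook is verified as proven.
    if is_mutating:
        return confidence is CapabilityConfidence.PROVEN
    return True

print(allow_route(True, CapabilityConfidence.EXPERIMENTAL, is_mutating=True))   # False
print(allow_route(True, CapabilityConfidence.PROVEN, is_mutating=True))         # True
print(allow_route(True, CapabilityConfidence.EXPERIMENTAL, is_mutating=False))  # True
```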
```powershell
[ordered]@{
    backend = "extender"
    probeReasonCode = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.capabilityProbeReasonCode)) { "CAPABILITY_UNKNOWN" } else { [string]$runtimeEvidence.result.capabilityProbeReasonCode }
    capabilityCount = 1
```
Populate capabilityCount from evidence instead of constant
When runtime evidence is present, the repro bundle always records capabilityCount = 1 instead of deriving the value from the probe output. Any run where the probe returns zero capabilities (or more than one) will emit incorrect diagnostics, which can skew hard-gate triage and make the attached evidence bundle factually inconsistent with runtime behavior.
Useful? React with 👍 / 👎.
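Deriving the count from evidence instead of hard-coding it is a one-liner; a Python sketch (the `result.capabilities` field name is an assumption about the probe output shape, not the actual schema):

```python
def capability_count(runtime_evidence: dict) -> int:
    # Count whatever the probe actually reported; default to 0 when the
    # evidence or its capability list is absent.
    result = runtime_evidence.get("result") or {}
    return len(result.get("capabilities") or [])

print(capability_count({"result": {"capabilities": ["set_credits", "spawn"]}}))  # 2
print(capability_count({"result": {"capabilities": []}}))                        # 0
print(capability_count({}))                                                      # 0
```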
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs (1)
341-378: ⚠️ Potential issue | 🟡 Minor

Add a deterministic tie-breaker for process selection.

When scores and tie-breakers are equal, ordering falls back to enumeration order, which can vary and lead to non-deterministic attach targets. Add a final stable tie-breaker (e.g., `ProcessId`) so selection is deterministic. As per coding guidelines, "Ensure attach host selection is deterministic when both swfoc.exe and StarWarsG.exe exist for FoC".

🔧 Suggested fix

```diff
     .OrderByDescending(x => x.SelectionScore)
     .ThenByDescending(x => x.WorkshopMatchCount)
     .ThenByDescending(x => x.RecommendationMatch)
     .ThenByDescending(x => x.MainModuleSize)
     .ThenByDescending(x => x.HasCommandLine)
+    .ThenByDescending(x => x.Process.ProcessId)
     .ToArray();
```
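The effect of the extra tie-breaker can be shown with a tiny Python sort (field names are illustrative): with the pid key appended, the winner no longer depends on enumeration order.

```python
candidates = [
    {"pid": 3980, "score": 10, "module_size": 512},
    {"pid": 4120, "score": 10, "module_size": 512},
]

def select(procs):
    # Mirrors OrderByDescending(score).ThenByDescending(size).ThenByDescending(pid).
    return sorted(procs, key=lambda c: (-c["score"], -c["module_size"], -c["pid"]))[0]

# Same winner regardless of input order, thanks to the pid tie-breaker.
print(select(candidates)["pid"])        # 4120
print(select(candidates[::-1])["pid"])  # 4120
```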
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs` around lines 341 - 378, The selection ordering for ranked candidates can be non-deterministic when scores and all tie-breakers match; update the final ordering chain after SelectionScore to include a stable tie-breaker using the process identifier (e.g., x.Process.ProcessId) so results are deterministic. Modify the LINQ sort after OrderByDescending(x => x.SelectionScore) / ThenByDescending chains to append a final ThenBy (or ThenByDescending as appropriate) on x.Process.ProcessId (use a consistent string comparison like Ordinal or OrdinalIgnoreCase) so identical scores always resolve in a stable order.

tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs (1)
26-68: ⚠️ Potential issue | 🟡 Minor

Ensure live evidence is produced or explicitly skip when output dir is unset.

Right now the test can run without emitting evidence when `SWFOC_LIVE_OUTPUT_DIR` is missing, which makes the live run non-deterministic and unverifiable. Consider skipping when the output directory is not configured, or writing to a default deterministic location. As per coding guidelines, "Runtime/tooling/test changes must include deterministic test evidence or an explicit justified skip".

🔧 Suggested fix (skip when output dir is missing)

```diff
 var roeCandidates = await FindRoeCandidatesAsync(locator);
 EnsureRoeCandidatesOrSkip(roeCandidates);
+if (!TryGetLiveOutputDirectory(out _))
+{
+    throw LiveSkip.For(_output, "SWFOC_LIVE_OUTPUT_DIR is not set; live evidence output required.");
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs` around lines 26 - 68, The test Roe_Attach_And_Credits_Action_Should_Succeed_On_Live_Process can run without producing evidence; before doing any live actions check the configured output directory (e.g. Environment.GetEnvironmentVariable("SWFOC_LIVE_OUTPUT_DIR") or whatever config the test suite uses) and explicitly skip the test if it's unset/empty (use Skip.If or the existing Ensure... helper pattern), or alternatively set a deterministic default output path; ensure TryWriteRuntimeEvidence is only called when the output dir is present so evidence is always produced or the test is skipped.

src/SwfocTrainer.App/ViewModels/MainViewModel.cs (1)
671-688: ⚠️ Potential issue | 🟡 Minor

Status message misleads when `universal_auto` overrides the recommendation.

When `universal_auto` is present (lines 677–681), `SelectedProfileId` is set to `universal_auto`, but the `Status` at lines 685–686 still announces `recommended: {recommended}` (e.g. `aotr_xyz`). A user reading the status will think the named mod profile is selected, but `universal_auto` is actually active. Either omit the recommendation hint when `universal_auto` wins, or surface both:

💡 Suggested status update

```diff
 Status = !string.IsNullOrWhiteSpace(recommended)
-    ? $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
+    ? Profiles.Contains(UniversalProfileId)
+        ? $"Loaded {Profiles.Count} profiles (universal_auto selected; detected: {recommended})"
+        : $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
     : $"Loaded {Profiles.Count} profiles";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 671 - 688, The status string can be misleading because when UniversalProfileId wins (Profiles.Contains(UniversalProfileId)) SelectedProfileId is set to UniversalProfileId but Status still shows the original recommended value; update the logic that sets Status (after SelectedProfileId is resolved) to reflect the actual chosen profile—either omit the "recommended: {recommended}" part when SelectedProfileId == UniversalProfileId, or include both pieces (e.g., "recommended: {recommended} (overridden by universal_auto)") so the UI accurately reflects SelectedProfileId, using the existing symbols Profiles, recommended, UniversalProfileId, SelectedProfileId and Status to locate and change the code.
🟡 Minor comments (20)
TestResults/runs/20260218-213838/issue-19-evidence-template.md-4-4 (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Scope label is narrower than the table coverage, and the TRX artifacts are missing from the Artifacts section.
Two distinct points:
Line 4 declares `scope: ROE`, but the evidence table on lines 9-10 covers both the `aotr_1397421866_swfoc` and `roe_3447786229_swfoc` profiles. The scope should reflect both, e.g. `scope: AOTR + ROE`.

The AI-generated summary notes that run `20260218-213838` produced four TRX artifacts (`live-credits.trx`, `live-hero-helper.trx`, `live-roe-health.trx`, `live-tactical.trx`) and `live-roe-runtime-evidence.json`, none of which are listed in the Artifacts section. Per the project's evidence conventions, the TRX files are the primary machine-readable test output and should be referenced here for traceability.

📝 Suggested additions
```diff
-- scope: ROE
+- scope: AOTR + ROE
 Artifacts:
 - TestResults\runs\20260218-213838\live-validation-summary.json
 - TestResults\runs\20260218-213838\launch-context-fixture.json
 - TestResults\runs\20260218-213838\repro-bundle.json
 - TestResults\runs\20260218-213838\repro-bundle.md
+- TestResults\runs\20260218-213838\live-tactical.trx
+- TestResults\runs\20260218-213838\live-hero-helper.trx
+- TestResults\runs\20260218-213838\live-roe-health.trx
+- TestResults\runs\20260218-213838\live-credits.trx
+- TestResults\runs\20260218-213838\live-roe-runtime-evidence.json
```

Also applies to: 19-24
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` at line 4, Update the front-matter and Artifacts section: change the scope value currently set as "scope: ROE" to include both profiles (e.g., "scope: AOTR + ROE") because the evidence table references both aotr_1397421866_swfoc and roe_3447786229_swfoc, and add the missing machine-readable outputs to the Artifacts list by including live-credits.trx, live-hero-helper.trx, live-roe-health.trx, live-tactical.trx and live-roe-runtime-evidence.json so the TRX files and the JSON runtime evidence are referenced for traceability.

TestResults/runs/20260218-213838/issue-19-evidence-template.md-14-17 (1)
14-17: ⚠️ Potential issue | 🟡 Minor

Empty parentheses `()` after each test suite name appear to be an unfilled token.

Lines 14-17 list all four suites as `Passed ()`. The trailing `()` is likely a placeholder for a test count or elapsed-time value that was never substituted. If these were autogenerated, verify that the template interpolation ran correctly; otherwise fill in the actual counts or remove the empty parens.

📝 Suggested fix
```diff
-- LiveTacticalToggleWorkflowTests: Passed ()
-- LiveHeroHelperWorkflowTests: Passed ()
-- LiveRoeRuntimeHealthTests: Passed ()
-- LiveCreditsTests: Passed ()
+- LiveTacticalToggleWorkflowTests: Passed (3/3)
+- LiveHeroHelperWorkflowTests: Passed (2/2)
+- LiveRoeRuntimeHealthTests: Passed (4/4)
+- LiveCreditsTests: Passed (1/1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` around lines 14 - 17, The four test-suite lines show an unfilled token "()" — update the evidence-generation/template logic so those placeholders are either populated with the correct value or removed; specifically, fix the template or interpolation that renders the entries for LiveTacticalToggleWorkflowTests, LiveHeroHelperWorkflowTests, LiveRoeRuntimeHealthTests and LiveCreditsTests so they output a test count or elapsed time (e.g., "Passed (N)" or "Passed (0s)") or simply "Passed" when no value is available. Locate the rendering code that emits those suite lines and either supply the missing variable or change the format string to omit the parentheses when the value is empty.

native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp-151-172 (1)
151-172: ⚠️ Potential issue | 🟡 Minor

Message-mode pipe: large messages (>16 KB) will be silently truncated.

With `PIPE_READMODE_MESSAGE`, if the message exceeds `kPipeBufferSize`, `ReadFile` returns `FALSE` with `ERROR_MORE_DATA`. The `while` loop exits, and the incomplete payload is processed. Given the expected command sizes this is unlikely to trigger, but consider checking `GetLastError() == ERROR_MORE_DATA` and either reading the remainder or returning an explicit error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 151 - 172, In ReadCommandLine, handle the message-mode pipe truncation case: after a ReadFile that returns FALSE, call GetLastError() and if it equals ERROR_MORE_DATA, either loop to read remaining chunks until the full message is consumed (appending to commandLine) or fail-fast by returning an explicit error/empty string to signal truncation; update logic around the ReadFile loop and refer to the ReadCommandLine function and kPipeBufferSize buffer to ensure you handle successive ReadFile calls when ERROR_MORE_DATA is reported rather than silently processing a partial command.

native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp-31-86 (1)
31-86: ⚠️ Potential issue | 🟡 Minor

Hand-rolled JSON extractors don't handle escaped characters or braces inside string values.

`ExtractStringValue` scans for the next `"` after the opening quote, so a value containing an escaped quote (`\"`) would be truncated. Similarly, `ExtractObjectJson` counts `{`/`}` without skipping those inside string literals, so a string value containing a brace would corrupt the extraction.

This is acceptable if the command payloads are entirely controlled and guaranteed to never contain such characters, but it will silently produce incorrect results for any unexpected input. Consider documenting this limitation, or switching to a lightweight JSON library (e.g., nlohmann/json or simdjson) if payload complexity grows.
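To make the failure mode concrete, here is a minimal Python analogue of the quote-scan approach next to a real parser. The helper name `naive_extract` is hypothetical; it only illustrates why scanning for the next `"` without considering escapes truncates the value:

```python
import json

def naive_extract(payload, key):
    """Mimics a quote-scan extractor: find the value's opening quote,
    then stop at the very next '"' — escape sequences are not considered."""
    marker = f'"{key}":"'
    start = payload.index(marker) + len(marker)
    end = payload.index('"', start)  # also stops at an escaped quote
    return payload[start:end]

payload = '{"name":"say \\"hi\\""}'          # JSON value is: say "hi"
truncated = naive_extract(payload, "name")   # stops at the escaped quote -> 'say \'
correct = json.loads(payload)["name"]        # a real parser handles the escape
```

The same class of bug applies to brace counting: a `{` inside a string literal is indistinguishable from a structural brace unless the scanner tracks string/escape state, which is exactly what a proper JSON library does.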
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 31 - 86, Replace the fragile hand-rolled parsers (ExtractStringValue and ExtractObjectJson) with a real JSON parse-and-extract using a JSON library (e.g., nlohmann::json): parse the incoming json string into a json object once inside both functions (or a shared helper), then safely lookup the key and return either the string value for ExtractStringValue or the object dump for ExtractObjectJson; wrap parsing/lookup in try/catch and return {} or "{}" on failure to preserve current behavior. Ensure you reference and update the functions named ExtractStringValue and ExtractObjectJson and add the required include and error-handling so escaped quotes and braces inside string values are handled correctly.

native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp-14-18 (1)
14-18: ⚠️ Potential issue | 🟡 Minor

`CapabilityState::Experimental` and `::Verified` are unreachable via the public API.

`CapabilityEntry.state` is typed as `CapabilityState` with three meaningful values (`Unknown`, `Experimental`, `Verified`), yet `markAvailable` takes no `CapabilityState` parameter. The implementation must therefore hard-code the resulting state (likely `Verified`), leaving `Experimental` permanently inaccessible, or inadvertently leave `state` as `Unknown` on an `available = true` entry — which would be a logic contradiction.

Either:
- Add a `CapabilityState` parameter to `markAvailable`, or
- Add a dedicated `markExperimental(const std::string& featureId, ...)` overload.

♻️ Option A — extend markAvailable signature
```diff
-    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
+    void markAvailable(const std::string& featureId,
+                       CapabilityState state = CapabilityState::Verified,
+                       const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
```

Also applies to: 30-30
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp` around lines 14 - 18, CapabilityState defines Unknown, Experimental, and Verified but the public API only exposes markAvailable with no state parameter, leaving Experimental unreachable or causing inconsistency between CapabilityEntry.state and available; update the API to either add a CapabilityState parameter to markAvailable (e.g., markAvailable(const std::string& featureId, CapabilityState state, ...)) and propagate that value into CapabilityEntry.state, or add a separate markExperimental(const std::string& featureId, ...) overload that sets CapabilityEntry.state = CapabilityState::Experimental, ensuring callers can explicitly choose Experimental vs Verified and that markAvailable retains/sets the correct Verified behavior; adjust all call sites and any validation logic that assumes state values accordingly (referencing CapabilityState, CapabilityEntry.state, markAvailable, and markExperimental).

tools/collect-mod-repro-bundle.ps1-513-518 (1)
513-518: ⚠️ Potential issue | 🟡 Minor

`$overlayState` is a hardcoded stub — values never change even when `$runtimeEvidence` is present.

`available` and `visible` are always `$false` and `reasonCode` is always `"CAPABILITY_BACKEND_UNAVAILABLE"`, even in the branch where `$runtimeEvidence` is non-null (the `source` field is the only dynamic part). This means the repro bundle and its Markdown summary always report the overlay as unavailable, masking any live overlay data that may be in the evidence.

If this is intentional scaffolding for a future capability, add an inline comment (e.g., `# TODO: populate from runtimeEvidence.result.overlayState once exposed`) so readers know it is a known placeholder rather than a silent data gap.
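For illustration, the populate-with-fallback shape could look like the following Python sketch. The helper name `build_overlay_state` and the evidence layout (`result.overlayState`) are assumptions based on the field names mentioned above, not the script's actual structure:

```python
# Defaults mirror the current stub: overlay reported unavailable.
DEFAULT_OVERLAY = {
    "available": False,
    "visible": False,
    "reasonCode": "CAPABILITY_BACKEND_UNAVAILABLE",
}

def build_overlay_state(runtime_evidence):
    """Prefer live overlay fields from the evidence payload; fall back
    to the stub defaults for anything missing or when evidence is absent."""
    overlay = ((runtime_evidence or {}).get("result") or {}).get("overlayState")
    if not overlay:
        return dict(DEFAULT_OVERLAY)
    return {key: overlay.get(key, default) for key, default in DEFAULT_OVERLAY.items()}
```

The key property is that the defaults only win when the evidence genuinely lacks overlay data, so live runs stop being masked.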
Verify each finding against the current code and only fix it if needed. In `@tools/collect-mod-repro-bundle.ps1` around lines 513 - 518, The $overlayState object is a hardcoded stub so available/visible/reasonCode never reflect runtime data; update the assignment to populate these fields from runtimeEvidence (e.g., use runtimeEvidence.result.overlayState.available, .visible, .reasonCode when $runtimeEvidence is non-null) and fall back to the existing defaults otherwise, or if this is intentional scaffolding add an explicit inline TODO comment (e.g., "# TODO: populate from runtimeEvidence.result.overlayState once exposed") next to the $overlayState declaration so readers know it is a placeholder; refer to $overlayState and $runtimeEvidence to locate and change the logic.

tools/collect-mod-repro-bundle.ps1-477-494 (1)
477-494: ⚠️ Potential issue | 🟡 Minor

`capabilityCount = 1` is hardcoded regardless of actual probed capabilities.

When `$runtimeEvidence` is present, `capabilityCount` is fixed at `1` rather than derived from evidence data or from the actual size of `$requiredCapabilities`. If the profile ever lists zero or more than one required capability, the bundle will misreport the probe count, making the snapshot misleading for diagnostics.

♻️ Proposed fix
```diff
-    capabilityCount = 1
+    capabilityCount = @($requiredCapabilities).Count
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/collect-mod-repro-bundle.ps1` around lines 477 - 494, The capabilityCount field in capabilityProbeSnapshot is hardcoded to 1; change it to compute the real count instead—when $runtimeEvidence is present set capabilityCount = ($requiredCapabilities | Measure-Object).Count (or if runtimeEvidence contains a capabilities array use that array's length), and keep the else branch as 0; update the capabilityProbeSnapshot construction (the block that defines backend/probeReasonCode/capabilityCount/requiredCapabilities/source) to use this computed count so the snapshot reflects actual required/probed capability numbers.

tools/run-live-validation.ps1-445-456 (1)
445-456: ⚠️ Potential issue | 🟡 Minor

Prior `$fatalError` (from repro-bundle emission) is silently overwritten by the hard gate.

When `collect-mod-repro-bundle.ps1` or `validate-repro-bundle.ps1` fails, line 441 sets `$fatalError` and `$bundleClassification` remains `""`. The hard gate then enters the `elseif ([string]::IsNullOrWhiteSpace($bundleClassification))` branch and replaces `$fatalError` with a generic "Hard gate could not determine repro-bundle classification." message. The original failure reason (e.g., `"collect-mod-repro-bundle.ps1 failed with exit code 1"`) is lost from the thrown exception at line 567. The `Write-Warning` at line 440 still prints it, but that context is gone by the time a CI system surfaces the error.

🛡️ Proposed fix — skip hard gate when a prior fatal error exists
```diff
 if ($RequireNonBlockedClassification) {
     if (-not $EmitReproBundle) {
         $fatalError = [InvalidOperationException]::new("-RequireNonBlockedClassification requires repro-bundle emission.")
     }
+    elseif ($null -ne $fatalError) {
+        # A prior fatal error already explains why classification is absent; preserve the original.
+    }
     elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
         $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
     }
     elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/run-live-validation.ps1` around lines 445 - 456, The hard-gate logic (the if ($RequireNonBlockedClassification) block) overwrites a previously-set $fatalError from repro-bundle steps; guard this block so it only runs when no prior fatal error exists. Modify the check to skip the entire hard-gate branch if $fatalError is already set (e.g., wrap the current block in an outer if (-not $fatalError) or equivalent), referencing the existing symbols $RequireNonBlockedClassification, $EmitReproBundle, $bundleClassification and $fatalError so the original exception from collect-mod-repro-bundle.ps1 / validate-repro-bundle.ps1 is preserved.

src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs-10-17 (1)
10-17: ⚠️ Potential issue | 🟡 Minor

Add XML doc comments to all four `CaptureFromPathAsync` overloads.

The build check flagged all four public members for missing XML documentation. The interface-level `<summary>` exists but per-member docs are absent, which is inconsistent with the documentation requirement enforced by the build pipeline.

📝 Suggested XML comments
```diff
+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath);

+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, CancellationToken cancellationToken);

+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, int processId);

+    /// <summary>
+    /// Captures a deterministic binary fingerprint from <paramref name="modulePath"/>
+    /// using the specified <paramref name="processId"/> for live-process context.
+    /// </summary>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, int processId, CancellationToken cancellationToken);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs` around lines 10 - 17, Add XML documentation to each of the four CaptureFromPathAsync overloads in IBinaryFingerprintService: provide a <summary> describing the purpose (capture a BinaryFingerprint from a module path), add <param> tags for modulePath and where applicable processId and cancellationToken (describe expected values and behavior), and include a <returns> describing the returned Task<BinaryFingerprint>. Ensure each overload (CaptureFromPathAsync(string), CaptureFromPathAsync(string, CancellationToken), CaptureFromPathAsync(string, int), CaptureFromPathAsync(string, int, CancellationToken)) has its own specific XML doc so the public members satisfy the build documentation rules.

src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs-10-21 (1)
10-21: ⚠️ Potential issue | 🟡 Minor

Add XML doc comments to the `ResolveAsync` overloads.

The build check flagged lines 10 and 16 for missing XML documentation on the public `ResolveAsync` overloads.

📝 Suggested XML comments
```diff
+    /// <inheritdoc cref="ResolveAsync(BinaryFingerprint, string, string, IReadOnlySet{string}, CancellationToken)"/>
     Task<CapabilityResolutionResult> ResolveAsync(
         BinaryFingerprint fingerprint,
         string requestedProfileId,
         string operationId,
         IReadOnlySet<string> resolvedAnchors);

+    /// <summary>
+    /// Resolves operation capability state from fingerprint-mapped anchor requirements.
+    /// </summary>
     Task<CapabilityResolutionResult> ResolveAsync(
         BinaryFingerprint fingerprint,
         string requestedProfileId,
         string operationId,
         IReadOnlySet<string> resolvedAnchors,
         CancellationToken cancellationToken);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs` around lines 10 - 21, Add XML doc comments to both ResolveAsync overloads in ICapabilityMapResolver: document the method purpose, the returned Task<CapabilityResolutionResult>, and each parameter (fingerprint of type BinaryFingerprint, requestedProfileId, operationId, resolvedAnchors) and for the second overload include the CancellationToken parameter; make sure the summary and param/returns tags are present and match the method signatures so the public API has proper XML documentation.

src/SwfocTrainer.Core/Services/SdkOperationRouter.cs-260-263 (1)
260-263: ⚠️ Potential issue | 🟡 Minor

`GetValue<string>()` throws `InvalidOperationException` for non-string `JsonNode` elements.

If a caller populates `resolvedAnchors` as a `JsonArray` containing numeric, boolean, or nested object nodes, `node?.GetValue<string>()` throws, and the exception propagates out of `ExtractResolvedAnchors` through `ExecuteAsync` without a diagnostics sink write.

🛡️ Proposed fix
```diff
-return CreateAnchorSet(jsonArray.Select(node => node?.GetValue<string>()));
+return CreateAnchorSet(jsonArray.Select(node =>
+    node is JsonValue jv && jv.TryGetValue<string>(out var s) ? s : null));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/SwfocTrainer.Core/Services/SdkOperationRouter.cs` around lines 260 - 263, The current JsonArray handling in ExtractResolvedAnchors uses node?.GetValue<string>() which throws for non-string JsonNode elements; update the Select to safely extract a string for each node by calling node?.GetValue<object>() and converting that to a string via Convert.ToString(...)/ToString() (preserving nulls) or falling back to node?.ToString() for complex nodes, then feed that IEnumerable<string?> into CreateAnchorSet so numeric, boolean and object nodes no longer throw; ensure this change is applied in the JsonArray branch around CreateAnchorSet so ExecuteAsync no longer propagates InvalidOperationException from GetValue<string>().

tools/native/resolve-cmake.ps1-52-57 (1)
52-57: ⚠️ Potential issue | 🟡 Minor

`"Visual Studio 18 2026"` generator requires CMake ≥ 4.2 — the result DTO doesn't communicate this constraint.

The `Visual Studio 18 2026` generator was added in CMake 4.2, which is therefore the lowest CMake version that supports it. The result object returned by this script carries `recommendedGenerator` but provides no `minCmakeVersion` hint. The caller (build-native.ps1) cannot distinguish a version-incompatible generator from a valid one without parsing `cmake --version` separately. Consider adding a `minCmakeVersion` field alongside `recommendedGenerator` so callers can fail fast with a clear message.

♻️ Add minCmakeVersion to the generator map
```diff
 switch ($major) {
-    18 { return "Visual Studio 18 2026" }
-    17 { return "Visual Studio 17 2022" }
-    16 { return "Visual Studio 16 2019" }
-    default { return $null }
+    18 { return [PSCustomObject]@{ Name = "Visual Studio 18 2026"; MinCmakeVersion = "4.2" } }
+    17 { return [PSCustomObject]@{ Name = "Visual Studio 17 2022"; MinCmakeVersion = "3.14" } }
+    16 { return [PSCustomObject]@{ Name = "Visual Studio 16 2019"; MinCmakeVersion = "3.14" } }
+    default { return $null }
 }
```

Then propagate `minCmakeVersion` through the `$result` object so `build-native.ps1` can gate on it before invoking CMake.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/native/resolve-cmake.ps1` around lines 52 - 57, Update the result DTO from resolve-cmake.ps1 to include a minCmakeVersion alongside recommendedGenerator so callers can check compatibility; modify the switch($major) mapping that returns "Visual Studio 18 2026"/"Visual Studio 17 2022"/"Visual Studio 16 2019" to instead return an object/hashtable containing both generator and minCmakeVersion (e.g., for major 18 set minCmakeVersion "4.2"), then ensure the script populates $result.minCmakeVersion in addition to $result.recommendedGenerator, and update build-native.ps1 to read $result.minCmakeVersion before invoking cmake to fail fast on incompatible CMake versions.

tools/native/resolve-cmake.ps1-75-77 (1)
75-77: ⚠️ Potential issue | 🟡 Minor

Disk-based fallback regex `^\d{2}$` never matches real VS installation directories.

Visual Studio installs under 4-digit year subdirectories (`2019`, `2022`, `2026`), not 2-digit version numbers. The `^\d{2}$` filter always yields an empty `$majorDirs` collection, making `$instances` permanently empty and causing `Resolve-VisualStudioMetadataFromDisk` to always return null metadata. The fallback is fully inoperative for every standard VS layout.

Practical impact is low (vswhere is always present when VS is installed), but the fallback is dead code on any real machine.
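For illustration, the intended filter-and-map step can be sketched in Python; the helper name `resolve_product_line_version` and the year→major table are assumptions mirroring the discussion, not the script's actual code:

```python
import re

# Hypothetical mapping: 4-digit VS year directory -> generator major version.
YEAR_TO_MAJOR = {"2026": "18", "2022": "17", "2019": "16"}

def resolve_product_line_version(dir_name):
    """Accept 4-digit year directories (the real on-disk layout) and map
    them to the 16/17/18 major numbers the generator lookup expects."""
    if not re.fullmatch(r"\d{4}", dir_name):  # a '^\d{2}$' filter would reject every real layout
        return None
    return YEAR_TO_MAJOR.get(dir_name, dir_name)
```

The two-digit pattern silently returns nothing for `"2022"`, which is exactly why the fallback never fires.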
🐛 Fix the year-directory regex and version extraction
```diff
 $majorDirs = Get-ChildItem -Path $root -Directory -ErrorAction SilentlyContinue |
-    Where-Object { $_.Name -match '^\d{2}$' } |
+    Where-Object { $_.Name -match '^\d{4}$' } |
     Sort-Object Name -Descending
```

Additionally update the `productLineVersion` derivation so it feeds the correct major number into `Resolve-RecommendedGenerator` (which expects 16/17/18, not a 4-digit year):

```diff
 $instances.Add([PSCustomObject]@{
     instancePath = $edition.FullName
-    productLineVersion = $majorDir.Name
+    productLineVersion = switch ([int]$majorDir.Name) {
+        2026 { "18" }
+        2022 { "17" }
+        2019 { "16" }
+        default { $majorDir.Name }
+    }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/native/resolve-cmake.ps1` around lines 75 - 77, The disk-fallback currently filters directories with '^\d{2}$' so $majorDirs is empty; change the regex to match 4-digit year folders (e.g. '^\d{4}$') and update the productLineVersion derivation so Resolve-RecommendedGenerator receives the VS major number (map year -> major, e.g. 2019 -> 16, 2022 -> 17, 2026 -> 18) when building $productLineVersion used by Resolve-RecommendedGenerator in Resolve-VisualStudioMetadataFromDisk so $instances is populated correctly.

tools/native/build-native.ps1-79-82 (1)
79-82: ⚠️ Potential issue | 🟡 Minor

CMake minimum version not validated before using the `Visual Studio 18 2026` generator.

The lowest CMake version that supports the Visual Studio 18 2026 generator is CMake 4.2. The script runs `cmake --version` for a liveness check but never parses the version number. When VS 18 is detected by `resolve-cmake.ps1` and a CMake < 4.2 binary is on `PATH`, the configure step fails and the script surfaces only the generic `"native configure failed for Windows mode"` message, giving no hint that the CMake version is the culprit. Add a minimum-version gate when the selected generator is `"Visual Studio 18 2026"`.
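The gate reduces to "parse the version triple, compare (major, minor) against a floor, fail closed on garbage". A minimal language-neutral sketch in Python (the helper name is hypothetical; `cmake --version` output is assumed to contain a `x.y.z` triple):

```python
import re

def meets_min_cmake(version_output, minimum=(4, 2)):
    """Parse `cmake --version` output and compare (major, minor)
    against a floor; unparseable output fails closed."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output)
    if not match:
        return False
    return (int(match.group(1)), int(match.group(2))) >= minimum
```

Tuple comparison gives the right ordering for free (e.g. 3.31 < 4.2 even though 31 > 2 numerically).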
🛡️ Proposed version gate in `Invoke-WindowsBuild`

```diff
 & $CmakePath --version
 if ((Get-LastExitCodeOrZero) -ne 0) {
     throw "cmake --version failed for '$CmakePath'."
 }
+
+# VS 18 2026 generator requires CMake >= 4.2
+if ($RecommendedGenerator -like "Visual Studio 18*") {
+    $verLine = (& $CmakePath --version 2>&1 | Select-String '[\d]+\.[\d]+\.[\d]+' | Select-Object -First 1)
+    if ($verLine -match '(\d+)\.(\d+)') {
+        $maj = [int]$Matches[1]; $min = [int]$Matches[2]
+        if ($maj -lt 4 -or ($maj -eq 4 -and $min -lt 2)) {
+            throw "[CMAKE_VERSION_INSUFFICIENT] Generator '$RecommendedGenerator' requires CMake >= 4.2 but found $maj.$min."
+        }
+    }
+}
```
Verify each finding against the current code and only fix it if needed. In `@tools/native/build-native.ps1` around lines 79 - 82, When the Windows generator "Visual Studio 18 2026" is selected (resolved in resolve-cmake.ps1 and used in Invoke-WindowsBuild), add a minimum CMake version check after invoking $CmakePath --version: parse the semantic version from the output (or call cmake --version and extract the major.minor.patch), compare it against 4.2.0 and throw a clear error if the installed CMake is older; update the existing liveness check that uses Get-LastExitCodeOrZero and the $CmakePath variable so that it fails with a specific message like "CMake >= 4.2 required for Visual Studio 18 2026 generator" instead of the generic configure failure.

docs/pr49-remediation-checklist.md-64-73 (1)
64-73: ⚠️ Potential issue | 🟡 Minor

`psscriptanalyzer_psavoidusingwritehost` entries for the reviewed scripts appear stale.

`normalize-signature-pack.ps1`, `capture-binary-fingerprint.ps1`, `extract-pe-metadata.ps1`, and `generate-signature-candidates.ps1` each use `Write-Output`, not `Write-Host`. These four entries (4 of the 20 total) may reflect a pre-remediation snapshot. Consider regenerating the checklist or noting which items are already resolved.
Verify each finding against the current code and only fix it if needed. In `@docs/pr49-remediation-checklist.md` around lines 64 - 73, The checklist entry for the rule psscriptanalyzer_psavoidusingwritehost is stale: update the remediation checklist to reflect that normalize-signature-pack.ps1, capture-binary-fingerprint.ps1, extract-pe-metadata.ps1, and generate-signature-candidates.ps1 use Write-Output (not Write-Host) by regenerating or re-scanning the scripts and removing or marking these four files as resolved; ensure the tools/native/*.ps1 and tools/research/run-capability-intel.ps1 entries remain accurate while regenerating the report so the listed files (tools/native/build-native.ps1, tools/native/resolve-cmake.ps1, tools/research/run-capability-intel.ps1, tools/run-live-validation.ps1) are only included if they truly violate the rule.

docs/EXTERNAL_TOOLS_SETUP.md-98-103 (1)
98-103: ⚠️ Potential issue | 🟡 Minor

Env var override example shows the default value — not useful.

Setting `SWFOC_EXTENDER_PIPE_NAME` to `"SwfocExtenderBridge"` (the default) doesn't illustrate an override scenario. Readers expecting a custom-value example will be confused.

📝 Suggested fix
```diff
-$env:SWFOC_EXTENDER_PIPE_NAME = "SwfocExtenderBridge"
+$env:SWFOC_EXTENDER_PIPE_NAME = "MyCustomExtenderPipe"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/EXTERNAL_TOOLS_SETUP.md` around lines 98 - 103, The example currently sets SWFOC_EXTENDER_PIPE_NAME to the default "SwfocExtenderBridge", which is misleading; update the docs (the section that mentions SwfocExtender.Host and the named pipe SwfocExtenderBridge) to show a real override by replacing the example value with a custom pipe name (e.g., MyCustomPipe) and add a short note that this environment variable changes the named pipe the host listens on so readers see how to override the default.

docs/LIVE_VALIDATION_RUNBOOK.md-74-80 (1)
74-80: ⚠️ Potential issue | 🟡 Minor

"Required for runtime-affecting changes" conflicts with the schema's unconditional requirement.

`tools/schemas/repro-bundle.schema.json` (lines 14–18) adds all five sections (`selectedHostProcess`, `backendRouteDecision`, `capabilityProbeSnapshot`, `hookInstallReport`, `overlayState`) to the top-level `required` array unconditionally — every bundle of any scope must include them. Labelling them "required for runtime-affecting changes" here implies they're optional for non-runtime changes, which is incorrect and will mislead consumers generating bundles.

Consider updating the heading to: "vNext bundle sections (required in all v1.1 bundles)" and removing the qualifier.
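A consumer-side check of the unconditional requirement could look like this illustrative Python sketch (the section names come from the schema's `required` array; the helper name is hypothetical):

```python
# Section names from repro-bundle.schema.json's top-level `required` array.
REQUIRED_SECTIONS = (
    "selectedHostProcess",
    "backendRouteDecision",
    "capabilityProbeSnapshot",
    "hookInstallReport",
    "overlayState",
)

def missing_sections(bundle):
    """Return the vNext sections absent from a bundle dict; per the schema
    they are required in every bundle, regardless of scope."""
    return [section for section in REQUIRED_SECTIONS if section not in bundle]
```

Because the schema imposes no conditional logic, there is no scope under which an empty result is optional — any non-empty return is a validation failure.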
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/LIVE_VALIDATION_RUNBOOK.md` around lines 74 - 80, Update the heading in LIVE_VALIDATION_RUNBOOK.md to reflect that the five vNext bundle sections (`selectedHostProcess`, `backendRouteDecision`, `capabilityProbeSnapshot`, `hookInstallReport`, `overlayState`) are unconditionally required by the schema; replace the phrase "required for runtime-affecting changes" with something like "vNext bundle sections (required in all v1.1 bundles)" and remove the qualifier that implies conditionality, and ensure the note references the schema's top-level `required` array in tools/schemas/repro-bundle.schema.json so consumers are not misled.

tools/fixtures/repro_bundle_sample.json-48-52 (1)
48-52: ⚠️ Potential issue | 🟡 Minor

`hookInstallReport.state` and `reasonCode` are semantically inconsistent.

`state: "unknown"` implies the hook install outcome is undetermined, while `reasonCode: "HOOK_INSTALL_FAILED"` asserts a definite failure. These contradict each other. A consistent sample would pair a known outcome with a matching reason code:

🔧 Option A — known failure state
```diff
 "hookInstallReport": {
-    "state": "unknown",
+    "state": "failed",
     "reasonCode": "HOOK_INSTALL_FAILED",
     "details": "Not captured in sample."
 },
```

🔧 Option B — genuinely unknown state

```diff
 "hookInstallReport": {
     "state": "unknown",
-    "reasonCode": "HOOK_INSTALL_FAILED",
+    "reasonCode": "HOOK_STATE_UNKNOWN",
     "details": "Not captured in sample."
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/fixtures/repro_bundle_sample.json` around lines 48 - 52, The hookInstallReport object currently has mismatched semantics between the state and reasonCode fields; update hookInstallReport so state and reasonCode are consistent—for a known failure set "state" to "failed" (or "error") and keep reasonCode "HOOK_INSTALL_FAILED" and a meaningful details string, or for an unknown outcome keep "state" as "unknown" and change/remove reasonCode (e.g., use "UNKNOWN" or omit reasonCode) and update details accordingly; modify the hookInstallReport.{state,reasonCode,details} values to follow one of these two consistent options.

README.md-136-145 (1)
136-145: ⚠️ Potential issue | 🟡 Minor

Research tooling section needs an explicit Windows-only note.

All scripts in this section operate on Windows PE binaries (swfoc.exe) and require a live Windows game installation. Per project conventions, Windows target assumptions must be called out explicitly in documentation. Additionally, the output paths on lines 141–142 reference `TestResults/research/<runId>/`, which mirrors the per-run path deviation flagged in the script itself.

🔧 Suggested header update

```diff
-## R&D Research Tooling (Deep RE Track)
+## R&D Research Tooling (Deep RE Track — Windows only)
```

As per coding guidelines: "Windows live target assumptions must be kept explicit in documentation and script help."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 136 - 145, Add an explicit Windows-only note to the "R&D Research Tooling (Deep RE Track)" section clarifying these scripts operate on Windows PE binaries and require a live Windows game installation; update the header and/or a short paragraph above the code block to state "Windows-only: requires a live Windows installation and targets PE binaries (e.g., swfoc.exe)". Also correct the output path guidance around generate-signature-candidates.ps1 and normalize-signature-pack.ps1 to avoid implying a static per-run location by referencing the runId placeholder consistently (e.g., TestResults/research/<runId>/...) and mention that runId varies per execution. Ensure the note references the exact script names shown (run-capability-intel.ps1, capture-binary-fingerprint.ps1, generate-signature-candidates.ps1, normalize-signature-pack.ps1) so readers know which tools are Windows-only.

TestResults/runs/20260218-213838/repro-bundle.md-21-22 (1)
21-22: ⚠️ Potential issue | 🟡 Minor

Scripting artifact — `.Replace('|','/')` expression emitted verbatim, causing malformed table rows.

The markdown generator appended the PowerShell method-call text literally rather than executing it. The `|` characters inside the single-quoted argument `'|'` are parsed as column separators, producing a 7-cell row in a 6-column table (confirmed by markdownlint MD056). Most renderers will silently drop the extra cell.

The corresponding JSON (repro-bundle.json lines 11 and 25) shows neither command line actually contains a `|`, so the replacement was superfluous — the trailing expression can simply be stripped.

🔧 Suggested fix

-| 45908 | swfoc.exe | unknown | 2011.81 | 3447786229,1397421866 | "[REDACTED_GAME_PATH]\swfoc.exe" STEAMMOD=3447786229 STEAMMOD=1397421866 .Replace('|','/') |
+| 45908 | swfoc.exe | unknown | 2011.81 | 3447786229,1397421866 | "[REDACTED_GAME_PATH]\swfoc.exe" STEAMMOD=3447786229 STEAMMOD=1397421866 |
-| 32952 | StarWarsG.exe | game_host | 2223.37 | 3447786229,1397421866 | StarWarsG.exe STEAMMOD=3447786229 STEAMMOD=1397421866 NOARTPROCESS IGNOREASSERTS.Replace('|','/') |
+| 32952 | StarWarsG.exe | game_host | 2223.37 | 3447786229,1397421866 | StarWarsG.exe STEAMMOD=3447786229 STEAMMOD=1397421866 NOARTPROCESS IGNOREASSERTS |

If future command lines do contain `|`, escape them as `\|` in the generator instead of calling `.Replace()` on the rendered string.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@TestResults/runs/20260218-213838/repro-bundle.md` around lines 21 - 22, The markdown generator emitted the literal PowerShell call ".Replace('|','/')" into repro-bundle.md, producing malformed table rows; remove the trailing ".Replace('|','/')" expression from the generated command-line strings (the entries for swfoc.exe and StarWarsG.exe) so the renderer receives the actual command text (verify against repro-bundle.json lines showing no '|' present), and instead implement escaping of any pipe characters at generation time by replacing '|' with '\|' before writing Markdown if pipes can appear in future command lines.
```cpp
class CapabilityProbe {
public:
    CapabilityProbe() = default;

    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
    bool isAvailable(const std::string& featureId) const;
    const std::unordered_map<std::string, CapabilityEntry>& snapshot() const noexcept;

private:
    std::unordered_map<std::string, CapabilityEntry> capabilities_;
};
```
capabilities_ map is not thread-safe — same data-race risk as HookLifecycleManager.
markAvailable (write) and isAvailable/snapshot (reads) on the same object from different threads constitute a data race under the C++ memory model. Add a mutable std::shared_mutex mutex_ and use std::unique_lock for writes and std::shared_lock for reads — mirroring the fix suggested for HookLifecycleManager.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`
around lines 26 - 36, The capabilities_ map in CapabilityProbe is vulnerable to
data races when markAvailable (writer) and isAvailable/snapshot (readers) run
concurrently; add a mutable std::shared_mutex member (e.g., mutable
std::shared_mutex mutex_) to the class and protect markAvailable with a
std::unique_lock<std::shared_mutex> while protecting isAvailable and snapshot
with std::shared_lock<std::shared_mutex>, ensuring safe concurrent access to
capabilities_ and preserving return types (const snapshot should still return
const std::unordered_map<std::string, CapabilityEntry>&).
```cpp
class HookLifecycleManager {
public:
    HookLifecycleManager() = default;

    void markInstalled(const std::string& hookId);
    void markFailed(const std::string& hookId, const std::string& reasonCode);
    void markRolledBack(const std::string& hookId);
    HookRecord get(const std::string& hookId) const;

private:
    std::unordered_map<std::string, HookRecord> hooks_;
};
```
No thread-safety on hooks_ — potential data race in hook/game-thread contexts.
HookLifecycleManager is stateful and likely accessed from both the game thread (hook callbacks) and the trainer/management thread. std::unordered_map is not thread-safe for concurrent writes or mixed read/write access; concurrent markInstalled/markFailed/get calls cause undefined behaviour.
Consider protecting hooks_ with a std::shared_mutex (C++17), using a reader-writer lock pattern to allow concurrent reads via get:
🔒 Proposed thread-safety fix
 #pragma once
 #include <string>
 #include <unordered_map>
+#include <shared_mutex>
 ...
 class HookLifecycleManager {
 public:
     HookLifecycleManager() = default;
     void markInstalled(const std::string& hookId);
     void markFailed(const std::string& hookId, const std::string& reasonCode);
     void markRolledBack(const std::string& hookId);
     HookRecord get(const std::string& hookId) const;
 private:
     std::unordered_map<std::string, HookRecord> hooks_;
+    mutable std::shared_mutex mutex_;
 };

Then in the .cpp, write methods acquire std::unique_lock<std::shared_mutex> and get() acquires std::shared_lock<std::shared_mutex>.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp`
around lines 26 - 37, The hooks_ map in HookLifecycleManager is unprotected and
can race across threads; add a std::shared_mutex member (e.g., mutex_) and
include <shared_mutex>, then change the writer methods markInstalled,
markFailed, markRolledBack to acquire a std::unique_lock<std::shared_mutex>
before mutating hooks_, and change get to acquire a
std::shared_lock<std::shared_mutex> for read-only access so concurrent readers
are allowed while writers are exclusive.
```csharp
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
    if (string.IsNullOrWhiteSpace(responseLine))
    {
        return CreateNoResponseResult(commandId);
    }

    var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
    return result ?? CreateInvalidResponseResult(commandId);
}
```
🧩 Analysis chain
🌐 Web query:
System.Text.Json JsonSerializer.Deserialize JsonException malformed JSON
💡 Result:
System.Text.Json.JsonSerializer.Deserialize(...) throws JsonException when either the JSON text isn’t valid JSON, there’s extra/trailing data beyond a single JSON value, or the JSON shape/types don’t match the .NET target type. The exception message typically includes Path, LineNumber, and BytePositionInLine, which tell you exactly where parsing/conversion failed. [1][2]
1) First: confirm you’re actually deserializing JSON
A very common cause is that the payload is empty, truncated, or not JSON at all (for example an HTML error page, proxy response, 429/500 text, etc.). This often shows up as errors like “invalid start of a value” (for example '<…') or “input does not contain any JSON tokens”. [4][5]
Debug pattern (HTTP): log the raw response before deserializing.
```csharp
var body = await response.Content.ReadAsStringAsync();
try
{
    return JsonSerializer.Deserialize<MyType>(body);
}
catch (JsonException ex)
{
    throw new InvalidOperationException(
        $"JSON parse failed: {ex.Message}\nRaw body:\n{body}", ex);
}
```

2) If it's "almost JSON": enable the specific allowances you need
By default, System.Text.Json rejects comments and trailing commas. You can allow them via options. [3]
```csharp
var options = new JsonSerializerOptions
{
    ReadCommentHandling = JsonCommentHandling.Skip,
    AllowTrailingCommas = true,
};
var obj = JsonSerializer.Deserialize<MyType>(json, options);
```

3) If JSON is valid but doesn't match your .NET type
You’ll get JsonException like “The JSON value could not be converted to …” with a Path pointing to the offending property. [2]
Typical fixes:
- Change your model property types / nullability to match payload.
- If numbers come as strings (or vice versa), use `JsonNumberHandling` (or a custom converter). Microsoft docs cover handling "invalid" forms like quoted numbers. [3]
4) If you have multiple top-level values / trailing junk
Deserialize expects one JSON value (one object/array/etc.). Extra data after it causes failure. If you truly need to read multiple values, use Utf8JsonReader configured with AllowMultipleValues. [2][5]
If you paste the exact exception message (including Path/LineNumber/BytePositionInLine) and a small sample of the JSON around that position, I can pinpoint the cause and the minimal fix.
Sources: [1] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/converters-how-to
[2] https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializer.deserialize
[3] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/invalid-json
[4] https://learn.microsoft.com/en-us/answers/questions/2201902/jsonreaderexception-when-trying-to-connect-use-azu
[5] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/use-utf8jsonreader
🏁 Script executed:
# Find the SendCoreAsync method to understand exception handling context
fd "NamedPipeExtenderBackend" -t f

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 196
🏁 Script executed:
# Search for SendCoreAsync and exception handling
rg "SendCoreAsync" src/SwfocTrainer.Runtime --context 15 -A 10

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 2480
🏁 Script executed:
# Check how ParseResponse is called and what catches exceptions
rg "ParseResponse" src/SwfocTrainer.Runtime -B 5 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 2271
🏁 Script executed:
# Get the full SendCoreAsync method
rg "private static async Task<ExtenderResult> SendCoreAsync" src/SwfocTrainer.Runtime -A 50

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 5102
🏁 Script executed:
# Look for catch blocks in SendCoreAsync and surrounding code
rg "catch|SendCoreAsync" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -B 2 -A 8

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 1768
🏁 Script executed:
# Find all CreateXxxResult methods to understand what states map to what
rg "Create.*Result\(" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -B 2 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 1808
🏁 Script executed:
# Check the entire file to understand exception flow and result creation
cat -n src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs | head -150

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 6524
🏁 Script executed:
# Get full implementation of Create*Result methods to see HookState values
rg "CreateInvalidResponseResult|CreateUnreachableResult|CreateTimeoutResult|CreateNoResponseResult" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -A 10

Repository: Prekzursil/SWFOC-Mod-Menu
Length of output: 2480
Catch JsonException in ParseResponse to avoid triggering unnecessary host restarts.
When JsonSerializer.Deserialize encounters malformed JSON, it throws JsonException. This exception bubbles up through SendCoreAsync's generic catch (Exception) handler and returns CreateUnreachableResult with HookState: "unreachable". In SendAsync, the "unreachable" state triggers the retry logic that restarts the bridge host process—even though the backend is reachable; the problem is just a malformed response.
Catch JsonException directly in ParseResponse and map it to CreateInvalidResponseResult, which returns HookState: "invalid_response" and will not trigger an unnecessary host restart.
🔧 Suggested fix
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
if (string.IsNullOrWhiteSpace(responseLine))
{
return CreateNoResponseResult(commandId);
}
-
- var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
- return result ?? CreateInvalidResponseResult(commandId);
+ try
+ {
+ var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
+ return result ?? CreateInvalidResponseResult(commandId);
+ }
+ catch (JsonException)
+ {
+ return CreateInvalidResponseResult(commandId);
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```csharp
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
    if (string.IsNullOrWhiteSpace(responseLine))
    {
        return CreateNoResponseResult(commandId);
    }

    var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
    return result ?? CreateInvalidResponseResult(commandId);
}
```

After:

```csharp
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
    if (string.IsNullOrWhiteSpace(responseLine))
    {
        return CreateNoResponseResult(commandId);
    }

    try
    {
        var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
        return result ?? CreateInvalidResponseResult(commandId);
    }
    catch (JsonException)
    {
        return CreateInvalidResponseResult(commandId);
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs` around lines
240 - 249, In ParseResponse, catch System.Text.Json.JsonException around the
JsonSerializer.Deserialize call and return
CreateInvalidResponseResult(commandId) when deserialization fails; keep the
existing null/whitespace check that returns CreateNoResponseResult(commandId)
and the existing path that returns the non-null result, but ensure any
JsonException does not propagate (so it won't reach SendCoreAsync's generic
catch) by handling it inside ParseResponse and mapping to
CreateInvalidResponseResult.
```markdown
| aotr_1397421866_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
| roe_3447786229_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
```
Evidence table contains only unfilled template placeholders — this artifact provides no evidentiary value.
Both profile rows (aotr_1397421866_swfoc and roe_3447786229_swfoc) still hold the verbatim template strings <fill pid/mode/reasonCode>, <pass/fail/skip + reason>, and <overall>. The file is committed as a live-run evidence artifact, but none of the per-profile attach diagnostics, workflow outcomes, or overall results have been populated. The document does not substantiate the claims made in the PR about the AOTR/ROE run.
These cells must be filled with the actual values captured during run 20260218-213838 (pid, mode, reasonCode from the capability probe snapshot; pass/fail/skip with reason codes from the tactical toggle and hero helper workflows; and the derived overall result per profile) before this file can serve as evidence.
📝 Example of a filled row (replace with actual values)
-| aotr_1397421866_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
-| roe_3447786229_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
+| aotr_1397421866_swfoc | pid=12345 / Sdk / CAPABILITY_PROBE_PASS | pass | pass | pass |
+| roe_3447786229_swfoc | pid=12345 / Sdk / CAPABILITY_PROBE_PASS | pass | pass | pass |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` around lines
9 - 10, The evidence table contains unfilled placeholders for the two profile
rows aotr_1397421866_swfoc and roe_3447786229_swfoc; replace each template cell
(<fill pid/mode/reasonCode>, <pass/fail/skip + reason>, <overall>) with the
actual captured values from run 20260218-213838: fill pid/mode/reasonCode with
the capability probe snapshot values, replace the two workflow result cells with
the tactical toggle and hero helper outcomes including reason codes (e.g.,
"pass: reason" or "fail: reason"), and compute/populate the derived overall
result for each profile so the table reflects real evidence.
```powershell
param(
    [ValidateSet("Auto", "Windows", "Wsl")][string]$Mode = "Auto",
    [string]$Configuration = "Release",
    [string]$BuildDir = "native/build-win-vs"
)
```
🛠️ Refactor suggestion | 🟠 Major
Missing deterministic smoke/fixture mode (guideline violation).
This script has concrete side effects (configure, compile, artifact copy) but exposes no -WhatIf or -DryRun path and has no associated deterministic smoke command documented in a script help block. As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command.
♻️ Minimal addition: add a -DryRun switch and script help
+<#
+.SYNOPSIS
+ Orchestrates the native SwfocExtender build (Windows VS/CMake or WSL).
+.PARAMETER Mode
+ Build mode: Auto (default), Windows, Wsl.
+.PARAMETER Configuration
+ CMake build configuration (default: Release).
+.PARAMETER BuildDir
+ Output directory for the Windows build (default: native/build-win-vs).
+.PARAMETER DryRun
+ Emit planned actions without executing CMake or copying artifacts.
+.NOTES
+ Windows build requires CMake >= 4.2 when Visual Studio 18 (2026) is detected.
+ WSL mode requires WSL installed and configured; paths are converted via wsl.exe wslpath.
+.EXAMPLE
+ # Smoke-test: verify cmake/VS detection without building
+ .\build-native.ps1 -DryRun
+#>
param(
[ValidateSet("Auto", "Windows", "Wsl")][string]$Mode = "Auto",
[string]$Configuration = "Release",
- [string]$BuildDir = "native/build-win-vs"
+ [string]$BuildDir = "native/build-win-vs",
+ [switch]$DryRun
 )

Pass $DryRun into Invoke-WindowsBuild and Invoke-WslBuild to gate the actual cmake / wsl.exe invocations.
As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command and Windows live target assumptions must be kept explicit in documentation and script help.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/native/build-native.ps1` around lines 1 - 5, Add a deterministic
smoke/dry-run path and a script help block: update the param block to include a
[switch]$DryRun and document the script with a comment-based help header
describing Mode, Configuration, BuildDir and the new DryRun switch; then
propagate $DryRun into the build invocations so that Invoke-WindowsBuild and
Invoke-WslBuild respect it and gate any real cmake/wsl.exe or artifact copy
operations (i.e., when $DryRun is set, run only the deterministic smoke
steps/logging and skip side-effecting commands).
```powershell
$outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
$OutputPath = Join-Path $outputDir "signature-pack.json"
```
Output path TestResults/research/$RunId violates the TestResults/runs/<runId>/ mandate.
Same issue as in capture-binary-fingerprint.ps1. Migrate to TestResults/runs/$RunId/research/ so artifacts are discoverable by downstream validation and repro-bundle tooling.
As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/generate-signature-candidates.ps1` around lines 30 - 32, The
script currently sets $outputDir to TestResults/research/$RunId which violates
per-run isolation; change $outputDir to join TestResults/runs/$RunId/research
(e.g., replace "TestResults/research/$RunId" with
"TestResults/runs/$RunId/research") and ensure $OutputPath (signature-pack.json)
is built from that new $outputDir; update any New-Item or downstream uses that
reference $outputDir so artifacts are created under
TestResults/runs/<runId>/research.
```powershell
$anchorIds = @($anchors | ForEach-Object { $_.id } | Sort-Object -Unique)

$operations = [ordered]@{
    list_selected    = [ordered]@{ requiredAnchors = @(); optionalAnchors = @("selected_hp", "selected_shield", "selected_owner_faction") }
    list_nearby      = [ordered]@{ requiredAnchors = @(); optionalAnchors = @("selected_hp", "selected_shield", "selected_owner_faction") }
    spawn            = [ordered]@{ requiredAnchors = @("spawn_point_write"); optionalAnchors = @("planet_owner") }
    kill             = [ordered]@{ requiredAnchors = @("selected_hp"); optionalAnchors = @("selected_shield") }
    set_owner        = [ordered]@{ requiredAnchors = @("selected_owner_faction"); optionalAnchors = @("planet_owner") }
    teleport         = [ordered]@{ requiredAnchors = @("selected_speed"); optionalAnchors = @("selected_owner_faction") }
    set_planet_owner = [ordered]@{ requiredAnchors = @("planet_owner"); optionalAnchors = @() }
    set_hp           = [ordered]@{ requiredAnchors = @("selected_hp"); optionalAnchors = @("selected_shield") }
    set_shield       = [ordered]@{ requiredAnchors = @("selected_shield"); optionalAnchors = @("selected_hp") }
    set_cooldown     = [ordered]@{ requiredAnchors = @("selected_cooldown_multiplier"); optionalAnchors = @("selected_speed") }
}
```
$anchorIds is computed but never used — missing cross-validation of operation anchors.
$anchorIds (line 53) accumulates all anchor IDs derived from the profile's signatureSets. It is never referenced again. The $operations block (lines 55–66) hardcodes anchor names like "spawn_point_write", "selected_hp", "planet_owner", etc. without checking that these IDs actually exist in $anchors. This produces a signature pack that may silently reference non-existent anchors, causing downstream validation failures.
The dead variable strongly indicates that a cross-validation step was planned but never implemented.
🐛 Proposed fix — add cross-validation after building $operations
$anchorIds = @($anchors | ForEach-Object { $_.id } | Sort-Object -Unique)
+
+# Cross-validate operation anchor references against the generated anchor set
+foreach ($opName in $operations.Keys) {
+ $op = $operations[$opName]
+ foreach ($ref in @($op.requiredAnchors) + @($op.optionalAnchors)) {
+ if ($ref -and $anchorIds -notcontains $ref) {
+ throw "[ERR_ANCHOR_REF_INVALID] Operation '$opName' references unknown anchor '$ref'"
+ }
+ }
+}

🧰 Tools
🪛 PSScriptAnalyzer (1.24.0)
[warning] 53-53: The variable 'anchorIds' is assigned but never used.
(PSUseDeclaredVarsMoreThanAssignments)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/generate-signature-candidates.ps1` around lines 53 - 66,
$anchorIds is collected but never used, so implement a cross-validation step
after building $operations that verifies each operation's requiredAnchors and
optionalAnchors exist in $anchorIds; iterate the [ordered]$operations (symbol:
$operations) and for each operation check its requiredAnchors and
optionalAnchors against $anchorIds, emit processLogger/error or Write-Error for
any missing required anchors (and fail/exit or mark the operation invalid) and
emit warnings for missing optional anchors (or remove them), ensuring you
reference $anchors/$anchorIds when validating so no operation references
nonexistent anchor IDs.
| if ([string]::IsNullOrWhiteSpace($OutputRoot)) { | ||
| $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId" | ||
| } |
Output path TestResults/research/ violates the per-run isolation convention.
The default $OutputRoot is set to TestResults/research/$RunId but all tooling scripts in this repo are required to write per-run output under TestResults/runs/<runId>/.
🔧 Proposed fix
-    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
+    $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId"
+ $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId"Based on learnings: "Applies to tools/**/*.{sh,bat,ps1,py,js,ts} : Per-run output isolation must be maintained under TestResults/runs// directory structure"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```powershell
if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
}
```

After:

```powershell
if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
    $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId"
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/research/run-capability-intel.ps1` around lines 16 - 18, The default
$OutputRoot currently points to "TestResults/research/$RunId" which violates the
per-run isolation rule; update the assignment of $OutputRoot (when
[string]::IsNullOrWhiteSpace($OutputRoot)) to use the run-scoped directory under
TestResults/runs by setting $OutputRoot = Join-Path $repoRoot
"TestResults/runs/$RunId" (or "TestResults/runs/$RunId/research" if you want to
preserve the research subfolder) so all outputs are placed under the
TestResults/runs/<RunId>/ hierarchy; adjust any dependent uses of $OutputRoot if
needed.
 )

-Write-Host "=== Running $Name ==="
+Write-Output "=== Running $Name ==="
Write-Output inside Invoke-LiveTest contaminates the function's return value — all TRX reads will silently fail
In PowerShell, Write-Output writes to the success output stream (stream 1), which is collected when a function's result is assigned to a variable. Write-Host (the previous implementation per the AI summary) writes to the host/information stream and is never collected this way.
Because Invoke-LiveTest is called with:
$executedTrx = Invoke-LiveTest -Name $test.Name ... # line 326

$executedTrx becomes a two-element array:
@("=== Running Live Credits ===", "C:\...\live-credits.trx")
instead of the expected string path.
Read-TrxSummary declares [string]$TrxPath, so PowerShell coerces the array using $OFS (default: space):
$TrxPath = "=== Running Live Credits === C:\...\live-credits.trx"
Test-Path returns $false for this garbage path, the 120-second retry loop exhausts, and every test is recorded with Outcome = "Missing" — a silent, catastrophic regression with no thrown exception.
🐛 Proposed fix
-Write-Output "=== Running $Name ==="
+Write-Host "=== Running $Name ==="
+ Write-Host "=== Running $Name ==="🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/run-live-validation.ps1` at line 127, The call inside Invoke-LiveTest
that uses Write-Output ("=== Running $Name ===") contaminates the function's
returned value (seen by $executedTrx) — replace that Write-Output with a
host/information-only writer (e.g., Write-Host or Write-Information with
-InformationAction Continue) so the banner text is not emitted on the success
output stream; update the line in Invoke-LiveTest where "=== Running $Name ==="
is emitted and verify Read-TrxSummary still receives a single string path (no
extraneous output).
- remove PowerShell automatic-variable assignments in native and live-validation scripts
- apply file-scoped cppcheck missingIncludeSystem suppression on native bridge/core/plugin files only
- document targeted suppression rationale in remediation checklist

Co-authored-by: Codex <noreply@openai.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0c4930d595
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```csharp
if (string.Equals(backendPreference, "helper", StringComparison.OrdinalIgnoreCase))
{
    return defaultBackend == ExecutionBackendKind.Save
        ? ExecutionBackendKind.Save
        : ExecutionBackendKind.Helper;
}
```
There was a problem hiding this comment.
Restrict helper backend preference to helper actions
Routing every non-save action to ExecutionBackendKind.Helper when backendPreference is "helper" causes memory/code-patch actions to bypass real memory writes and return a dispatched helper success instead. In this commit, helper execution remains a dispatch stub (ExecuteHelperActionAsync) rather than a memory backend, so profiles that set backendPreference: "helper" can silently report success for actions like set_unit_cap without applying the mutation.
- Move cppcheck file suppressions to the top of affected native files
- Replace designated initializers with explicit member assignment to improve analyzer visibility
- Mark protocol/contract carrier members maybe_unused where read paths are deferred

Co-authored-by: Codex <noreply@openai.com>
- Add file-level cppcheck unusedStructMember suppression only on native protocol headers
- Keep suppression scoped to PR49 Codacy blocker files

Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 4
🧹 Nitpick comments (8)
native/SwfocExtender.Core/src/CapabilityProbe.cpp (1)
5-11: `CapabilityState::Experimental` is unreachable via the public API.

markAvailable unconditionally sets state = CapabilityState::Verified, leaving Experimental a dead enumerator. Callers probing snapshot() can never observe an Experimental entry through any current write path. If this state is intentional future scaffolding, document it; otherwise either expose a markExperimental overload or remove the enumerator.

♻️ Minimal fix: add an optional state parameter to `markAvailable`

In CapabilityProbe.hpp:

-    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
+    void markAvailable(const std::string& featureId,
+                       const std::string& reasonCode = "CAPABILITY_PROBE_PASS",
+                       CapabilityState state = CapabilityState::Verified);

In CapabilityProbe.cpp:

-void CapabilityProbe::markAvailable(const std::string& featureId, const std::string& reasonCode) {
+void CapabilityProbe::markAvailable(const std::string& featureId, const std::string& reasonCode, CapabilityState state) {
     CapabilityEntry entry {};
     entry.available = true;
-    entry.state = CapabilityState::Verified;
+    entry.state = state;
     entry.reasonCode = reasonCode;
     capabilities_[featureId] = entry;
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Core/src/CapabilityProbe.cpp` around lines 5 - 11, The current markAvailable(const std::string& featureId, const std::string& reasonCode) always sets entry.state = CapabilityState::Verified making CapabilityState::Experimental unreachable; change the API to allow callers to set the state by adding an optional state parameter (e.g., add a third parameter with default CapabilityState::Verified) in the declaration in CapabilityProbe.hpp and update the implementation in CapabilityProbe.cpp to use that parameter when assigning entry.state (leave CapabilityEntry, capabilities_ and snapshot() behavior unchanged); alternatively, if Experimental is not needed remove the enumerator and any references—otherwise add a new markExperimental helper that sets entry.state = CapabilityState::Experimental and sets capabilities_[featureId] accordingly.

tools/research/generate-signature-candidates.ps1 (2)
Line 74: `sourceProfilePath` embeds an absolute local filesystem path in the artifact.

`[System.IO.Path]::GetFullPath($ProfilePath)` resolves to a machine-specific absolute path (e.g., `C:\agents\…\profiles\…`). This path will be baked into every generated `signature-pack.json`, making artifacts non-portable and potentially leaking CI agent or developer machine paths in shared repro bundles.

♻️ Proposed fix — store a repo-relative path instead

```diff
-        sourceProfilePath = [System.IO.Path]::GetFullPath($ProfilePath)
+        sourceProfilePath = [System.IO.Path]::GetFullPath($ProfilePath).Replace([System.IO.Path]::GetFullPath($repoRoot), '').TrimStart([System.IO.char]'\', [System.IO.char]'/')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/generate-signature-candidates.ps1` at line 74, sourceProfilePath currently uses [System.IO.Path]::GetFullPath($ProfilePath) which bakes an absolute, machine-specific path into signature-pack.json; change it to compute and store a repository-relative path instead (e.g., use [System.IO.Path]::GetRelativePath($RepoRoot, $ProfilePath) or compute a relative URI from $RepoRoot/$PSScriptRoot to $ProfilePath) so artifacts are portable and don't leak CI/dev paths; update the assignment to sourceProfilePath to produce the repo-relative string and ensure downstream consumers still resolve it relative to the repository root when reading signature-pack.json.
Line 79: `Set-Content -Encoding UTF8` emits a UTF-8 BOM on Windows PowerShell 5.1.

On Windows PowerShell 5.1, `-Encoding UTF8` produces a UTF-8-with-BOM file. Downstream JSON consumers (schema validators, `ConvertFrom-Json` in other scripts, and Node.js `JSON.parse`) may fail or produce unexpected results when a BOM prefix is present. PowerShell 7+ changed this behavior and no longer adds a BOM with `-Encoding UTF8`.

♻️ Proposed fix — use .NET write to ensure BOM-less UTF-8

```diff
-($result | ConvertTo-Json -Depth 12) | Set-Content -Path $OutputPath -Encoding UTF8
+$json = $result | ConvertTo-Json -Depth 12
+[System.IO.File]::WriteAllText($OutputPath, $json, [System.Text.UTF8Encoding]::new($false))
```

This approach works consistently across PowerShell 5.1 and 7+.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/research/generate-signature-candidates.ps1` at line 79, The current write uses Set-Content -Encoding UTF8 which emits a UTF-8 BOM on Windows PowerShell 5.1; replace the Set-Content call that writes ($result | ConvertTo-Json -Depth 12) to $OutputPath with a .NET write that uses a UTF8 encoding without BOM (e.g. call [System.IO.File]::WriteAllText($OutputPath, $jsonString, [System.Text.UTF8Encoding]::new($false))) so the JSON is written BOM-less and behaves consistently across PowerShell 5.1 and 7+.

native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp (1)
Lines 27-29: `lockEnabled_` and `lockedCreditsValue_` are write-only — they are stored but never read.

Both atomics are set in `execute()` (Lines 31–32 of `EconomyPlugin.cpp`) but consulted by neither `capabilitySnapshot()` nor any other current method. If they represent future state that will be exposed later, add a brief comment to that effect; otherwise, removing them eliminates dead state and reduces the cognitive overhead of the `execute()` atomic-store group.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp` around lines 27 - 29, The atomics lockEnabled_ and lockedCreditsValue_ in class EconomyPlugin are only written (in execute()) but never read (e.g., capabilitySnapshot() or any other method); either remove these unused members to eliminate dead state or add a clear comment above their declarations explaining they are reserved for future public state exposure and will be read by forthcoming methods, and ensure execute() remains the single writer; if you intend them to be consumed now, update capabilitySnapshot() or the appropriate accessor to read them instead.

native/SwfocExtender.Bridge/src/BridgeHostMain.cpp (2)
Lines 33-60: `EscapeJson` is duplicated from `NamedPipeBridgeServer.cpp`.

This is the same issue flagged in `NamedPipeBridgeServer.cpp`. Both copies should be replaced by a shared internal utility header.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Bridge/src/BridgeHostMain.cpp` around lines 33 - 60, Replace the duplicated EscapeJson implementation in BridgeHostMain.cpp and NamedPipeBridgeServer.cpp with a single shared internal utility: create an internal header (e.g., InternalJsonUtils.h) that declares EscapeJson and provide a single inline/inline-static implementation or a .cpp implementation (to avoid ODR) that both BridgeHostMain.cpp and NamedPipeBridgeServer.cpp include; then remove the local EscapeJson definitions from both files and update includes to use the new header so only one canonical EscapeJson (function name EscapeJson) is used across the codebase.
Lines 274-284: No POSIX signal handler — `WaitForShutdownSignal` loops forever on non-Windows.

`g_running` is initialized `true` and only set to `false` inside `CtrlHandler`, which is `#if defined(_WIN32)` only. On Linux/macOS (including WSL, which the PR explicitly mentions as a fallback target), there is no `SIGINT`/`SIGTERM` handler, so the process can only be killed with `SIGKILL`.

🔧 Proposed fix — add POSIX signal handler

```diff
 #if defined(_WIN32)
 #define NOMINMAX
 #include <windows.h>
 #endif
+
+#if !defined(_WIN32)
+#include <csignal>
+#endif

 void InstallCtrlHandler() {
 #if defined(_WIN32)
     SetConsoleCtrlHandler(CtrlHandler, TRUE);
+#else
+    auto posixHandler = [](int) { g_running.store(false); };
+    std::signal(SIGINT, posixHandler);
+    std::signal(SIGTERM, posixHandler);
 #endif
 }
```
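The proposed non-Windows side can be exercised as a standalone sketch; the handler name `PosixShutdownHandler` is illustrative, and `g_running` here is a local stand-in mirroring the bridge host's flag:

```cpp
#include <atomic>
#include <cassert>
#include <csignal>

std::atomic<bool> g_running{true};

// Mirrors the Windows CtrlHandler behaviour: the only work done inside the
// handler is an async-signal-safe, lock-free atomic store.
extern "C" void PosixShutdownHandler(int) {
    g_running.store(false);
}

void InstallPosixHandlers() {
    std::signal(SIGINT, PosixShutdownHandler);
    std::signal(SIGTERM, PosixShutdownHandler);
}
```

With these handlers installed, a shutdown loop polling `g_running` exits after `SIGINT`/`SIGTERM` instead of requiring `SIGKILL`.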
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Bridge/src/BridgeHostMain.cpp` around lines 274 - 284, On POSIX the program never registers a SIGINT/SIGTERM handler so g_running stays true and WaitForShutdownSignal spins forever; add a POSIX signal handler that mirrors CtrlHandler behaviour (sets g_running=false) and register it from InstallCtrlHandler when not on _WIN32. Specifically, implement a function (e.g., PosixSignalHandler) that accepts the signal and sets g_running.store(false), and in InstallCtrlHandler's non-WIN32 branch use sigaction (or signal) to register handlers for SIGINT and SIGTERM so WaitForShutdownSignal can exit cleanly.

native/SwfocExtender.Plugins/src/EconomyPlugin.cpp (1)
Lines 30-32: Three separate atomic stores are not atomic as a group — concurrent readers can observe a torn state.

```cpp
hookInstalled_.store(true);              // observer sees "installed"...
lockEnabled_.store(request.lockValue);   // ...but lock state is still old
lockedCreditsValue_.store(request.intValue);
```

If `execute()` is ever called concurrently with a future snapshot or diagnostic reader that inspects more than `hookInstalled_`, an intermediate view (`hookInstalled_=true`, `lockEnabled_=false`, `lockedCreditsValue_=0`) is observable. Grouping the three values in a single struct and replacing the atomics with a `std::mutex`-guarded assignment (or an `std::atomic<std::shared_ptr<StateSnapshot>>`) closes the gap and makes the invariant explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Plugins/src/EconomyPlugin.cpp` around lines 30 - 32, Multiple independent atomic stores (hookInstalled_, lockEnabled_, lockedCreditsValue_) can produce torn intermediate states visible to concurrent readers (e.g., execute()); fix by grouping these fields into a single snapshot and performing an atomic swap of the snapshot or a mutex-guarded assignment. Define a StateSnapshot struct containing hookInstalled, lockEnabled and lockedCreditsValue, replace the three atomics with either a std::atomic<std::shared_ptr<StateSnapshot>> statePtr_ and publish a new snapshot with statePtr_.store(std::make_shared<StateSnapshot>{...}) in the place where hookInstalled_/lockEnabled_/lockedCreditsValue_ were set, or protect a single non-atomic StateSnapshot with a std::mutex and assign/read under the mutex; update execute() and any readers to load the shared_ptr (or lock the mutex) and read the consistent snapshot.

native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp (1)
Lines 89-116: `EscapeJson` is copy-pasted verbatim into `BridgeHostMain.cpp`.

Both translation units define an identical `EscapeJson` in their anonymous namespaces. Promote it to a shared internal utility header (e.g., `native/SwfocExtender.Bridge/src/JsonUtils.hpp`) and include it from both `.cpp` files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 89 - 116, There are duplicate definitions of EscapeJson in anonymous namespaces across two translation units; extract that implementation into a shared internal header (e.g., JsonUtils.hpp) declaring an inline EscapeJson(const std::string&) function (or place it in a unique internal namespace) with include guards/pragma once, remove the duplicate definitions from NamedPipeBridgeServer.cpp and BridgeHostMain.cpp, and `#include` the new header from both .cpp files so they share the single implementation.
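A minimal sketch of such a shared header; the escape set below is an assumption based on typical JSON string escaping, not a copy of the project's actual implementation:

```cpp
#include <cassert>
#include <string>

// Hypothetical JsonUtils.hpp contents. 'inline' allows both
// BridgeHostMain.cpp and NamedPipeBridgeServer.cpp to include this header
// without violating the one-definition rule.
inline std::string EscapeJson(const std::string& input) {
    std::string out;
    out.reserve(input.size());
    for (char c : input) {
        switch (c) {
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            case '\n': out += "\\n";  break;
            case '\r': out += "\\r";  break;
            case '\t': out += "\\t";  break;
            default:   out += c;      break;
        }
    }
    return out;
}
```

Keeping the implementation header-only avoids adding a new translation unit to either target's build.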
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp`:
- Around line 132-143: The pipe is created in CreateBridgePipe with message-mode
flags which causes ReadCommandLine to silently drop messages larger than
kPipeBufferSize; change the pipe to byte mode by removing PIPE_TYPE_MESSAGE and
PIPE_READMODE_MESSAGE (use byte mode) in CreateBridgePipe, and/or update
ReadCommandLine to explicitly loop and handle ERROR_MORE_DATA from ReadFile
(accumulate partial reads until no more data) so commands > kPipeBufferSize are
correctly appended; apply the same change for the other CreateNamedPipe usage
referenced in the diff (the 152-166 area) and ensure ReadCommandLine still
relies on '\n' framing after switching to byte mode.
- Around line 32-87: ExtractStringValue and ExtractObjectJson are fragile by
manually scanning for quotes/braces and fail on escaped quotes or braces inside
strings; replace both helpers with a proper JSON parse using a real parser
(e.g., include the nlohmann::json single-header), parse the incoming json into a
nlohmann::json object, then retrieve string fields with
at(key).get<std::string>() for commandId/featureId and retrieve payloadJson by
serializing the nested object (or using dump()) and handle parse errors by
catching nlohmann::json::parse_error to return empty/default values; update any
callers that used ExtractStringValue/ExtractObjectJson to use the new
parsed-object approach (functions: ExtractStringValue, ExtractObjectJson) so
escaped characters and strings are handled correctly.
In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`:
- Around line 22-25: Replace the C++17 attribute [[maybe_unused]] on the struct
members available, state (type CapabilityState) and reasonCode with cppcheck
suppressions so Codacy stops flagging unusedStructMember: either put an inline
comment suppression for cppcheck (e.g. a cppcheck-suppress unusedStructMember
comment) next to each member declaration in CapabilityProbe.hpp, or add a
file-level directive (cppcheck-suppress-file unusedStructMember) alongside the
existing missingIncludeSystem suppression at the top of the file; ensure the
symbols available, state and reasonCode remain unchanged except for replacing
the attribute with the cppcheck-compatible comment.
In `@tools/run-live-validation.ps1`:
- Around line 445-456: The current hard-gate block using
$RequireNonBlockedClassification overwrites the original exception in
$fatalError when bundle generation/validation fails and $bundleClassification is
empty; modify the logic around the $RequireNonBlockedClassification check to
preserve the original exception by wrapping it as an inner/inner-like exception
or appending its message to the new InvalidOperationException (e.g., when
setting $fatalError keep the original exception object and include its
Message/ToString() in the new exception), and change the blocked-value test from
an explicit list ($bundleClassification -in
@("blocked_environment","blocked_profile_mismatch")) to a prefix check (e.g.,
test whether $bundleClassification -like "blocked_*" or use -match '^blocked_')
so future blocked_* classifications are caught; ensure the final throw
$fatalError still carries the original failure details.
---
Duplicate comments:
In
`@native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp`:
- Around line 37-56: The class NamedPipeBridgeServer lacks a destructor so its
member std::thread worker_ can call std::terminate() if still joinable; add a
destructor ~NamedPipeBridgeServer() that ensures the worker is stopped and
joined: call stop() (or otherwise clear running_) and then if
(worker_.joinable()) worker_.join(); locate the class declaration/implementation
for NamedPipeBridgeServer and implement the destructor there to safely shut down
the runLoop and join worker_ to avoid terminate on destruction.
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp`:
- Around line 208-236: stop() can deadlock because the wake-up CreateFileA trick
may fail; instead, abort the worker's blocking ConnectNamedPipe by
cancelling/closing the server pipe handle directly. Expose or store the server
pipe HANDLE used by the worker (e.g., serverHandle_ created in the worker where
CreateNamedPipe/ConnectNamedPipe is called), and in
NamedPipeBridgeServer::stop() after setting running_ to false call
CancelIoEx(serverHandle_, nullptr) (or CancelIo on older targets) and
CloseHandle(serverHandle_) if handle is valid before joining worker_; keep the
existing CreateFileA wake attempt as a fallback but ensure
CancelIoEx/CloseHandle always runs so worker_.join() cannot deadlock.
In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`:
- Around line 27-37: The class CapabilityProbe exposes an unprotected mutable
unordered_map capabilities_ causing data races: protect all accesses in
markAvailable, isAvailable, and snapshot by adding a synchronization primitive
(e.g., std::mutex or std::shared_mutex) and locking appropriately (exclusive
lock in markAvailable, shared/read lock in isAvailable) and change snapshot() to
return a copy of the map (return by value) instead of a const reference to avoid
iterator/rehash invalidation; update the methods (markAvailable, isAvailable,
snapshot) to acquire the correct lock around any use of capabilities_ to ensure
thread-safety.
In
`@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp`:
- Around line 27-38: hooks_ is accessed by concurrent readers/writers but
HookLifecycleManager has no synchronization; add a mutable std::shared_mutex
member (e.g., hooks_mutex_) to the class and protect writer methods
markInstalled, markFailed, markRolledBack with
std::unique_lock<std::shared_mutex> and protect the const accessor get with
std::shared_lock<std::shared_mutex> so reads can run concurrently while writes
are exclusive.
In `@tools/native/build-native.ps1`:
- Around line 1-5: The script is missing a -DryRun switch and a comment-based
help block; add a [switch]$DryRun parameter to the param() list and thread it
into the existing Invoke-WindowsBuild and Invoke-WslBuild calls so those
functions gate cmake/wsl.exe invocations and artifact copy operations when
$DryRun is set, and add a PowerShell comment-based help block (.SYNOPSIS,
.DESCRIPTION, .NOTES, .EXAMPLE) at the top of the file documenting Mode,
Configuration, BuildDir and the DryRun smoke path and the Windows live
assumptions; ensure the help block and the new DryRun param name exactly match
(DryRun) so callers and help lookups work consistently.
- Around line 160-174: The WSL path conversion in Invoke-WslBuild is broken
because Replace("\\", "/") doesn't convert normal Windows backslashes or
translate drive letters; replace the current $linuxScript/$linuxRoot Replace
calls with calls to wsl.exe wslpath -u to convert each Windows path into a
proper WSL path (run wsl.exe -- wslpath -u "<windowsPath>" and capture/Trim()
the output), then use those converted paths when building $quotedCommand; also
check the wslpath call exit/output and throw a clear error if conversion fails
so the later bash invocation gets valid /mnt/... paths.
In `@tools/research/generate-signature-candidates.ps1`:
- Around line 53-66: $anchorIds is computed but never used; add a
cross-validation step after building $operations that iterates the keys in
$operations (e.g., $operations.Keys or each
value.requiredAnchors/optionalAnchors) and verifies every anchor name exists in
$anchorIds, and if any are missing emit an explicit failure (non-zero exit) with
a clear classification code and message identifying the missing anchors and the
operation (use the symbols $anchorIds and $operations to locate code); ensure
missing artifacts are treated as explicit failures (do not silently skip) and
return an error code per the project's classification guideline.
- Around line 29-36: The output directory creation uses
TestResults/research/$RunId which breaks the per-run isolation rule; update the
branch that sets $outputDir when [string]::IsNullOrWhiteSpace($OutputPath) so it
builds TestResults/runs/$RunId/research (e.g. use Join-Path with
"TestResults/runs/$RunId/research") before creating the directory and assigning
$OutputPath; keep references to $OutputPath, $outputDir and $RunId when
adjusting the path and ensure the else branch behavior (creating parent dir of
$OutputPath) remains unchanged.
- Around line 1-6: Add a deterministic smoke/example invocation to the script
header (a "# Example:" line showing a reproducible invocation that uses the
repo's deterministic fixture for this tool) and replace the two plain-string
throw usages in this file with machine-readable failure messages that include
explicit classification codes; locate the throws (the two "throw" statements in
the script) and change them to include a stable code prefix or structured
payload (e.g., CODE:message or a small structured object) such as an ERR_* code
for missing fixture/invalid input so the failures are parseable by automation
and match the project's classification guidelines.
In `@tools/run-live-validation.ps1`:
- Line 127: The Invoke-LiveTest function emits status with Write-Output which
writes to the success stream and contaminates its return value; change the call
that currently uses Write-Output "=== Running $Name ===" back to a
non-success-stream logger (e.g., Write-Host or Write-Verbose/Write-Information
depending on verbosity policy) so Invoke-LiveTest only returns the TRX path
string; ensure the exact string "=== Running $Name ===" is no longer emitted to
pipeline output and keep the surrounding logic in Invoke-LiveTest unchanged.
---
Nitpick comments:
In `@native/SwfocExtender.Bridge/src/BridgeHostMain.cpp`:
- Around line 33-60: Replace the duplicated EscapeJson implementation in
BridgeHostMain.cpp and NamedPipeBridgeServer.cpp with a single shared internal
utility: create an internal header (e.g., InternalJsonUtils.h) that declares
EscapeJson and provide a single inline/inline-static implementation or a .cpp
implementation (to avoid ODR) that both BridgeHostMain.cpp and
NamedPipeBridgeServer.cpp include; then remove the local EscapeJson definitions
from both files and update includes to use the new header so only one canonical
EscapeJson (function name EscapeJson) is used across the codebase.
- Around line 274-284: On POSIX the program never registers a SIGINT/SIGTERM
handler so g_running stays true and WaitForShutdownSignal spins forever; add a
POSIX signal handler that mirrors CtrlHandler behaviour (sets g_running=false)
and register it from InstallCtrlHandler when not on _WIN32. Specifically,
implement a function (e.g., PosixSignalHandler) that accepts the signal and sets
g_running.store(false), and in InstallCtrlHandler's non-WIN32 branch use
sigaction (or signal) to register handlers for SIGINT and SIGTERM so
WaitForShutdownSignal can exit cleanly.
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp`:
- Around line 89-116: There are duplicate definitions of EscapeJson in anonymous
namespaces across two translation units; extract that implementation into a
shared internal header (e.g., JsonUtils.hpp) declaring an inline
EscapeJson(const std::string&) function (or place it in a unique internal
namespace) with include guards/pragma once, remove the duplicate definitions
from NamedPipeBridgeServer.cpp and BridgeHostMain.cpp, and `#include` the new
header from both .cpp files so they share the single implementation.
In `@native/SwfocExtender.Core/src/CapabilityProbe.cpp`:
- Around line 5-11: The current markAvailable(const std::string& featureId,
const std::string& reasonCode) always sets entry.state =
CapabilityState::Verified making CapabilityState::Experimental unreachable;
change the API to allow callers to set the state by adding an optional state
parameter (e.g., add a third parameter with default CapabilityState::Verified)
in the declaration in CapabilityProbe.hpp and update the implementation in
CapabilityProbe.cpp to use that parameter when assigning entry.state (leave
CapabilityEntry, capabilities_ and snapshot() behavior unchanged);
alternatively, if Experimental is not needed remove the enumerator and any
references—otherwise add a new markExperimental helper that sets entry.state =
CapabilityState::Experimental and sets capabilities_[featureId] accordingly.
In
`@native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp`:
- Around line 27-29: The atomics lockEnabled_ and lockedCreditsValue_ in class
EconomyPlugin are only written (in execute()) but never read (e.g.,
capabilitySnapshot() or any other method); either remove these unused members to
eliminate dead state or add a clear comment above their declarations explaining
they are reserved for future public state exposure and will be read by
forthcoming methods, and ensure execute() remains the single writer; if you
intend them to be consumed now, update capabilitySnapshot() or the appropriate
accessor to read them instead.
In `@native/SwfocExtender.Plugins/src/EconomyPlugin.cpp`:
- Around line 30-32: Multiple independent atomic stores (hookInstalled_,
lockEnabled_, lockedCreditsValue_) can produce torn intermediate states visible
to concurrent readers (e.g., execute()); fix by grouping these fields into a
single snapshot and performing an atomic swap of the snapshot or a mutex-guarded
assignment. Define a StateSnapshot struct containing hookInstalled, lockEnabled
and lockedCreditsValue, replace the three atomics with either a
std::atomic<std::shared_ptr<StateSnapshot>> statePtr_ and publish a new snapshot
with statePtr_.store(std::make_shared<StateSnapshot>{...}) in the place where
hookInstalled_/lockEnabled_/lockedCreditsValue_ were set, or protect a single
non-atomic StateSnapshot with a std::mutex and assign/read under the mutex;
update execute() and any readers to load the shared_ptr (or lock the mutex) and
read the consistent snapshot.
In `@tools/research/generate-signature-candidates.ps1`:
- Line 74: sourceProfilePath currently uses
[System.IO.Path]::GetFullPath($ProfilePath) which bakes an absolute,
machine-specific path into signature-pack.json; change it to compute and store a
repository-relative path instead (e.g., use
[System.IO.Path]::GetRelativePath($RepoRoot, $ProfilePath) or compute a relative
URI from $RepoRoot/$PSScriptRoot to $ProfilePath) so artifacts are portable and
don't leak CI/dev paths; update the assignment to sourceProfilePath to produce
the repo-relative string and ensure downstream consumers still resolve it
relative to the repository root when reading signature-pack.json.
- Line 79: The current write uses Set-Content -Encoding UTF8 which emits a UTF-8
BOM on Windows PowerShell 5.1; replace the Set-Content call that writes ($result
| ConvertTo-Json -Depth 12) to $OutputPath with a .NET write that uses a UTF8
encoding without BOM (e.g. call [System.IO.File]::WriteAllText($OutputPath,
$jsonString, [System.Text.UTF8Encoding]::new($false))) so the JSON is written
BOM-less and behaves consistently across PowerShell 5.1 and 7+.
```cpp
std::string ExtractStringValue(const std::string& json, const std::string& key) {
    const auto quotedKey = "\"" + key + "\"";
    auto keyPos = json.find(quotedKey);
    if (keyPos == std::string::npos) {
        return {};
    }

    auto colonPos = json.find(':', keyPos + quotedKey.length());
    if (colonPos == std::string::npos) {
        return {};
    }

    auto firstQuote = json.find('"', colonPos + 1);
    if (firstQuote == std::string::npos) {
        return {};
    }

    auto secondQuote = json.find('"', firstQuote + 1);
    if (secondQuote == std::string::npos || secondQuote <= firstQuote) {
        return {};
    }

    return json.substr(firstQuote + 1, secondQuote - firstQuote - 1);
}

std::string ExtractObjectJson(const std::string& json, const std::string& key) {
    const auto quotedKey = "\"" + key + "\"";
    auto keyPos = json.find(quotedKey);
    if (keyPos == std::string::npos) {
        return "{}";
    }

    auto colonPos = json.find(':', keyPos + quotedKey.length());
    if (colonPos == std::string::npos) {
        return "{}";
    }

    auto openBrace = json.find('{', colonPos + 1);
    if (openBrace == std::string::npos) {
        return "{}";
    }

    auto depth = 0;
    for (std::size_t i = openBrace; i < json.size(); ++i) {
        if (json[i] == '{') {
            ++depth;
        } else if (json[i] == '}') {
            --depth;
            if (depth == 0) {
                return json.substr(openBrace, i - openBrace + 1);
            }
        }
    }

    return "{}";
}
```
Naive JSON helpers are fragile: escaped quotes break `ExtractStringValue`, braces inside strings corrupt `ExtractObjectJson`.

`ExtractStringValue` (Lines 44–54) finds the closing delimiter with a bare `json.find('"', firstQuote + 1)`, so a value like `"foo\"bar"` is truncated to `"foo\"` — the backslash is not treated as an escape sequence.

`ExtractObjectJson` (Lines 69–86) counts `{`/`}` bytes without skipping string content, so `{"key":"val{ue"}` inflates the depth count and returns the wrong span.

Both helpers are used for `commandId`, `featureId`, and `payloadJson` extraction. While the current IPC payloads are generated by trusted C# test code (making exploitation unlikely), invalid intermediate state during any future free-form payload expansion could cause silent misrouting. Consider using a minimal real JSON parser (e.g., nlohmann/json single-header) for robustness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 32 -
87, ExtractStringValue and ExtractObjectJson are fragile by manually scanning
for quotes/braces and fail on escaped quotes or braces inside strings; replace
both helpers with a proper JSON parse using a real parser (e.g., include the
nlohmann::json single-header), parse the incoming json into a nlohmann::json
object, then retrieve string fields with at(key).get<std::string>() for
commandId/featureId and retrieve payloadJson by serializing the nested object
(or using dump()) and handle parse errors by catching
nlohmann::json::parse_error to return empty/default values; update any callers
that used ExtractStringValue/ExtractObjectJson to use the new parsed-object
approach (functions: ExtractStringValue, ExtractObjectJson) so escaped
characters and strings are handled correctly.
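Short of pulling in a full parser, the escaped-quote failure mode alone can be closed with an escape-aware scan. This is a hedged sketch (`ExtractStringValueSafe` is a hypothetical name, not the project's helper), shown only to illustrate why the bare `find('"')` is insufficient:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Escape-aware scan for the value of "key":"...".
// Escape handling is simplified: a two-char escape like \" keeps the second
// character verbatim, which is enough to locate the true closing quote.
std::string ExtractStringValueSafe(const std::string& json, const std::string& key) {
    const std::string quotedKey = "\"" + key + "\"";
    auto keyPos = json.find(quotedKey);
    if (keyPos == std::string::npos) return {};
    auto colonPos = json.find(':', keyPos + quotedKey.size());
    if (colonPos == std::string::npos) return {};
    auto firstQuote = json.find('"', colonPos + 1);
    if (firstQuote == std::string::npos) return {};
    std::string value;
    for (std::size_t i = firstQuote + 1; i < json.size(); ++i) {
        char c = json[i];
        if (c == '\\' && i + 1 < json.size()) {
            value += json[++i];  // consume the escape pair
        } else if (c == '"') {
            return value;        // first unescaped quote closes the value
        } else {
            value += c;
        }
    }
    return {};
}
```

A real JSON parser remains the robust choice, since this sketch still does not decode multi-character escapes such as `\n` or `\uXXXX`.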
```cpp
#if defined(_WIN32)
HANDLE CreateBridgePipe(const std::string& fullPipeName) {
    return CreateNamedPipeA(
        fullPipeName.c_str(),
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        kPipeBufferSize,
        kPipeBufferSize,
        0,
        nullptr);
}
```
ReadCommandLine silently truncates any command larger than 16 KB due to message-mode pipe semantics.
The pipe is created with PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE (Line 137). In message mode, when the payload exceeds kPipeBufferSize, ReadFile returns FALSE with GetLastError() == ERROR_MORE_DATA — causing the while (ReadFile(...) && bytesRead > 0) condition to short-circuit before any append. The result is an empty commandLine, producing a spurious "missing commandId" error response.
The simplest fix is to switch the pipe to byte mode since \n-framing is already used:
🔧 Proposed fix — byte-mode pipe + explicit `ERROR_MORE_DATA` loop

```diff
-        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
+        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
```

Alternatively, keep message mode and handle `ERROR_MORE_DATA` inside `ReadCommandLine`:

```diff
 std::string ReadCommandLine(HANDLE pipe, std::array<char, kPipeBufferSize>& buffer) {
     std::string commandLine;
     DWORD bytesRead = 0;
-    while (ReadFile(pipe, buffer.data(), static_cast<DWORD>(buffer.size()), &bytesRead, nullptr) && bytesRead > 0) {
+    BOOL ok = FALSE;
+    while ((ok = ReadFile(pipe, buffer.data(), static_cast<DWORD>(buffer.size()), &bytesRead, nullptr)) || GetLastError() == ERROR_MORE_DATA) {
+        if (bytesRead == 0) break;
         commandLine.append(buffer.data(), bytesRead);
```
commandLine.append(buffer.data(), bytesRead);Also applies to: 152-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 132 -
143, The pipe is created in CreateBridgePipe with message-mode flags which
causes ReadCommandLine to silently drop messages larger than kPipeBufferSize;
change the pipe to byte mode by removing PIPE_TYPE_MESSAGE and
PIPE_READMODE_MESSAGE (use byte mode) in CreateBridgePipe, and/or update
ReadCommandLine to explicitly loop and handle ERROR_MORE_DATA from ReadFile
(accumulate partial reads until no more data) so commands > kPipeBufferSize are
correctly appended; apply the same change for the other CreateNamedPipe usage
referenced in the diff (the 152-166 area) and ensure ReadCommandLine still
relies on '\n' framing after switching to byte mode.
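The `\n`-framed accumulation described above can be sketched portably, without the Win32 API; `ReadCommandLineFramed` and its `readChunk` callback are hypothetical stand-ins for the pipe read loop, not the bridge's actual interface:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <functional>
#include <string>

// readChunk stands in for ReadFile and may deliver fewer bytes than requested,
// as byte-mode pipes (or ERROR_MORE_DATA continuations) do. Accumulation
// continues until the '\n' frame delimiter appears or the source drains.
std::string ReadCommandLineFramed(
    const std::function<std::size_t(char*, std::size_t)>& readChunk) {
    std::string commandLine;
    char buffer[8];  // deliberately tiny to force multiple reads
    std::size_t bytesRead = 0;
    while ((bytesRead = readChunk(buffer, sizeof(buffer))) > 0) {
        commandLine.append(buffer, bytesRead);
        auto newline = commandLine.find('\n');
        if (newline != std::string::npos) {
            return commandLine.substr(0, newline);  // strip frame delimiter
        }
    }
    return commandLine;
}
```

Because the framing is explicit, a command larger than any single read buffer is reassembled correctly regardless of how the transport chunks it.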
```powershell
if ($RequireNonBlockedClassification) {
    if (-not $EmitReproBundle) {
        $fatalError = [InvalidOperationException]::new("-RequireNonBlockedClassification requires repro-bundle emission.")
    }
    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
        $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
    }
    elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
        $fatalError = [InvalidOperationException]::new(
            "Hard gate failed: classification '$bundleClassification' is blocked. Launch with correct process context and rerun.")
    }
}
```
Root-cause exception is silently replaced when bundle generation fails and the hard gate is active
When collect-mod-repro-bundle.ps1 or validate-repro-bundle.ps1 throws, the catch block at line 441 sets $fatalError to the original exception. Control then falls into the $RequireNonBlockedClassification block; because $bundleClassification is still "", the IsNullOrWhiteSpace branch (line 449–451) overwrites $fatalError with a generic "Hard gate could not determine repro-bundle classification" — discarding the original root-cause exception entirely. Only a Write-Warning message survives. The final throw $fatalError at line 567 then throws the gate error, making the actual failure opaque.
Consider preserving the original exception either as an inner exception or by incorporating it in the gate error's message:
🛡️ Proposed fix

```diff
- elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
-     $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
+ elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
+     $rootCause = if ($null -ne $fatalError) { " Root cause: $($fatalError.Message)" } else { "" }
+     $fatalError = [InvalidOperationException]::new(
+         "Hard gate could not determine repro-bundle classification.$rootCause")
  }
```

Hardcoded blocked-value list will silently miss future schema values
The gate only checks "blocked_environment" and "blocked_profile_mismatch". If the repro-bundle schema introduces additional "blocked_*" classification values, they will pass the gate undetected. A prefix check is more future-proof:
♻️ Proposed refactor

```diff
- elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
+ elseif ($bundleClassification.StartsWith("blocked_", [System.StringComparison]::OrdinalIgnoreCase)) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `tools/run-live-validation.ps1` around lines 445–456, the hard-gate block guarded by `$RequireNonBlockedClassification` overwrites the original exception in `$fatalError` when bundle generation or validation fails and `$bundleClassification` is empty. Preserve the original exception by wrapping it as an inner exception or by appending its message to the new `InvalidOperationException` (keep the original exception object and include its `Message`/`ToString()` when setting `$fatalError`). Also change the blocked-value test from the explicit list (`$bundleClassification -in @("blocked_environment","blocked_profile_mismatch")`) to a prefix check (for example `$bundleClassification -like "blocked_*"` or `-match '^blocked_'`) so future `blocked_*` classifications are caught. Ensure the final `throw $fatalError` still carries the original failure details.
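Both suggested changes can be modeled together. The sketch below is a language-neutral Python analogue (the names `HardGateError` and `evaluate_hard_gate` are invented for illustration, not part of the script) showing root-cause chaining plus a prefix check:

```python
class HardGateError(RuntimeError):
    """Stand-in for the gate's InvalidOperationException."""


def evaluate_hard_gate(classification, require_non_blocked, root_cause=None):
    """Raise on blocked or undeterminable classifications, preserving
    any earlier failure as the chained cause instead of discarding it."""
    if not require_non_blocked:
        return
    if not classification or classification.isspace():
        # chain the original failure rather than replacing it
        raise HardGateError(
            "Hard gate could not determine repro-bundle classification."
        ) from root_cause
    if classification.lower().startswith("blocked_"):
        # prefix check catches future blocked_* values too
        raise HardGateError(
            f"Hard gate failed: classification '{classification}' is blocked."
        )
```

In PowerShell the equivalent chaining is passing the caught exception as the inner-exception constructor argument of `InvalidOperationException`.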
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5dd08b1fbc
```powershell
$backendRouteDecision = if ($null -ne $runtimeEvidence) {
    [ordered]@{
        backend = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.backendRoute)) { "unknown" } else { [string]$runtimeEvidence.result.backendRoute }
        allowed = [bool]$runtimeEvidence.result.succeeded
```
Derive route allowance from route diagnostics
backendRouteDecision.allowed is currently tied to result.succeeded, which conflates backend routing with final action outcome. If routing succeeds but the action later fails (for example due to payload/readback/runtime errors), the repro bundle will incorrectly report the route as disallowed, producing misleading evidence for hard-gate triage and reason-code analysis.
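The separation this finding asks for can be sketched as follows (field names such as `routeSucceeded` are hypothetical stand-ins for whatever the runtime evidence actually records):

```python
def backend_route_decision(route_diag, action_result):
    """Build the bundle section so `allowed` reflects the routing
    decision itself, while the final action outcome is reported
    separately instead of being conflated with it."""
    return {
        "backend": route_diag.get("backendRoute") or "unknown",
        "allowed": bool(route_diag.get("routeSucceeded", False)),
        "actionSucceeded": bool(action_result.get("succeeded", False)),
    }
```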
```powershell
$hookInstallReport = if ($null -ne $runtimeEvidence) {
    [ordered]@{
        state = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.hookState)) { "unknown" } else { [string]$runtimeEvidence.result.hookState }
        reasonCode = if ([bool]$runtimeEvidence.result.succeeded) { "CAPABILITY_PROBE_PASS" } else { "HOOK_INSTALL_FAILED" }
```
Stop inferring hook install failure from action success
This assigns HOOK_INSTALL_FAILED whenever the action result is unsuccessful, even if hook installation actually succeeded and the failure happened later in execution. In runs where set_credits fails after routing (for example due to write/readback issues), the bundle will incorrectly classify hook lifecycle health, which undermines the reliability evidence contract.
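Deriving the reason code from the hook's own lifecycle state, rather than the action outcome, could look like this sketch (the `HOOK_STATE_UNKNOWN` code and the exact healthy-state names beyond `HOOK_ONESHOT` are assumptions for illustration):

```python
def hook_install_report(hook_state, action_succeeded):
    """Classify hook lifecycle health from the hook state itself;
    `action_succeeded` is intentionally not consulted, since a later
    write/readback failure says nothing about hook installation."""
    state = hook_state or "unknown"
    if state in ("HOOK_ONESHOT", "HOOK_INSTALLED"):
        reason = "CAPABILITY_PROBE_PASS"
    elif state == "unknown":
        reason = "HOOK_STATE_UNKNOWN"   # hypothetical reason code
    else:
        reason = "HOOK_INSTALL_FAILED"
    return {"state": state, "reasonCode": reason}
```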
User description
Summary
- Reconciles `rd/deep-re-universal-sdk` to proven runtime/extender core only.
- `set_credits` with extender-first execution.

Affected Profiles

- `base_swfoc`
- `aotr_1397421866_swfoc`
- `roe_3447786229_swfoc`

Evidence

- Run `20260218-213838` passed
- `TestResults/runs/20260218-213838/repro-bundle.json`
- `TestResults/runs/20260218-213838/repro-bundle.md`

Reason-Code Diagnostics

- `selectedHostProcess.name=StarWarsG.exe`
- `backendRouteDecision.backend=Extender`
- `backendRouteDecision.reasonCode=CAPABILITY_PROBE_PASS`
- `capabilityProbeSnapshot.probeReasonCode=CAPABILITY_PROBE_PASS`
- `hookInstallReport.state=HOOK_ONESHOT`

Validation

- `dotnet restore SwfocTrainer.sln`
- `dotnet build SwfocTrainer.sln -c Release`
- `dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"`
- `powershell.exe -File tools/validate-policy-contracts.ps1`
- `powershell.exe -File tools/validate-binary-fingerprint.ps1 -Strict -FingerprintPath tools/fixtures/binary_fingerprint_sample.json`
- `powershell.exe -File tools/validate-signature-pack.ps1 -Strict -SignaturePackPath tools/fixtures/signature_pack_sample.json`
- `powershell.exe -File tools/run-live-validation.ps1 -Configuration Release -NoBuild -Scope ROE -Strict -RequireNonBlockedClassification`
- `powershell.exe -File tools/validate-repro-bundle.ps1 -Strict -BundlePath TestResults/runs/20260218-213838/repro-bundle.json`

PR Type
Enhancement, Tests
Description
- Implements comprehensive SDK operation routing with fail-closed safety gates for extender backend execution
- Adds universal profile variant resolution with confidence scoring and process detection heuristics
- Introduces native extender bridge with named-pipe communication for capability probing and credits execution
- Implements binary fingerprint capture and capability map resolution for deterministic operation availability
- Adds backend routing decision logic with auto-fallback from extender to legacy paths based on capability state
- Integrates runtime mode probing from symbol health indicators to disambiguate tactical vs galactic execution modes
- Adds comprehensive test coverage for fingerprint service, capability maps, profile variants, and backend routing
- Implements profile inheritance merging for backend preferences, host preferences, and capability requirements
- Adds native Windows build orchestration with CMake generator auto-detection and Visual Studio resolution
- Implements live validation hard gate with TRX polling and blocking classification detection
- Updates profile format and architecture documentation for tiered backend routing and SDK execution paths
- Adds schema definitions and sample fixtures for binary fingerprints, signature packs, and repro bundles
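The auto-fallback routing described above (extender to legacy based on capability state, with fail-closed gates for mutations) can be sketched roughly as follows. The function name, preference strings, and exact decision order are illustrative assumptions, not the PR's actual `BackendRouter` logic:

```python
def resolve_backend(preference, capability_state, is_mutation):
    """Fail-closed routing sketch.

    Routes to the extender when the profile pins it or when auto
    routing sees a Verified capability; blocks mutations whose
    capability state is Unknown rather than silently downgrading;
    otherwise falls back to the legacy memory path.
    """
    if preference == "extender":
        return "Extender"
    if preference == "auto" and capability_state == "Verified":
        return "Extender"
    if is_mutation and capability_state == "Unknown":
        return "Blocked"   # fail closed: never mutate on unproven capability
    return "Memory"
```

The key property is that a mutation never reaches a write path unless its capability confidence is explicit.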
File Walkthrough
54 files
RuntimeAdapter.cs
Universal profile resolution and extender backend routing integration
src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs
confidence tracking
fail-closed safety gates
execution mode
and module size metrics
validation
MainViewModel.cs
Universal profile UI integration and variant validation
src/SwfocTrainer.App/ViewModels/MainViewModel.cs
display
operations
selection score information
NamedPipeExtenderBackend.cs
Named-pipe extender backend bridge implementation
src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs
process
extender
resolution
contracts
SdkOperationRouter.cs
SDK operation routing with capability validation
src/SwfocTrainer.Core/Services/SdkOperationRouter.cs
contracts
mapping
BackendRouter.cs
Backend routing with fail-closed mutation safety gates
src/SwfocTrainer.Runtime/Services/BackendRouter.cs
mutations
states
CapabilityMapResolver.cs
Capability map resolution from binary fingerprints
src/SwfocTrainer.Runtime/Services/CapabilityMapResolver.cs
fingerprint-keyed maps
availability
ProfileVariantResolver.cs
Universal profile variant resolution service
src/SwfocTrainer.Runtime/Services/ProfileVariantResolver.cs
process detection
SdkCapabilityModels.cs
SDK capability and profile variant model definitions
src/SwfocTrainer.Core/Models/SdkCapabilityModels.cs
SdkCapabilityStatus, CapabilityReasonCode, and anchor-related types
BinaryFingerprint, CapabilityMap, and ProfileVariantResolution records
SdkOperationRequest and SdkOperationResult envelopes
RuntimeModeProbeResolver.cs
Runtime mode probing from symbol health indicators
src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs
health
BinaryFingerprintService.cs
Binary fingerprint capture and identification service
src/SwfocTrainer.Runtime/Services/BinaryFingerprintService.cs
ProcessLocator.cs
Process locator host role and module size detection
src/SwfocTrainer.Runtime/Services/ProcessLocator.cs
match count
fields
RuntimeReasonCode.cs
Runtime reason code taxonomy for diagnostics
src/SwfocTrainer.Core/Models/RuntimeReasonCode.cs
mode, hook, helper, safety, and rollback operations
and execution gating
SdkOperationCatalog.cs
SDK operation catalog and definition models
src/SwfocTrainer.Core/Models/SdkOperationCatalog.cs
SdkOperationCatalog static class defining canonical SDK operations for research runtime
SdkOperationDefinition sealed record with operation metadata (id, mutation flag, allowed modes)
ReadOnly() and Mutation() for operation definition creation
IsModeAllowed() method
BackendRoutingModels.cs
Backend routing and capability models
src/SwfocTrainer.Core/Models/BackendRoutingModels.cs
ExecutionBackendKind enum for backend identity (Extender, Helper, Memory, Save)
CapabilityConfidenceState enum (Experimental, Verified)
BackendCapability, CapabilityReport, BackendHealth, BackendRouteDecision, ExtenderCommand, ExtenderResult
IsFeatureAvailable() and Unknown() factory
App.xaml.cs
SDK runtime service registration
src/SwfocTrainer.App/App.xaml.cs
IBinaryFingerprintService, ICapabilityMapResolver, ISdkExecutionGuard registrations
ISdkOperationRouter, IBackendRouter, IExecutionBackend
RuntimeModeProbeResolver singleton
FileSystemProfileRepository.cs
Profile inheritance merging enhancements
src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs
RequiredCapabilities and ExperimentalFeatures arrays in profile inheritance
BackendPreference and HostPreference fields with child override precedence
deduplication
ProfileValidator.cs
Profile backend and host preference validation
src/SwfocTrainer.Profiles/Validation/ProfileValidator.cs
backendPreference field (allowed values: auto, extender, helper, memory)
hostPreference field (allowed values: starwarsg_preferred, any)
InvalidDataException with descriptive messages for invalid values
SdkExecutionGuard.cs
SDK execution guard implementation
src/SwfocTrainer.Runtime/Services/SdkExecutionGuard.cs
ISdkExecutionGuard interface with fail-closed execution policy
capability available
SdkExecutionDecision with reason codes and descriptive messages
NoopSdkRuntimeAdapter.cs
No-op SDK runtime adapter stub
src/SwfocTrainer.Runtime/Services/NoopSdkRuntimeAdapter.cs
ISdkRuntimeAdapter as no-op stub for research builds
OperationNotMapped reason code
ICapabilityMapResolver.cs
Capability map resolver contract
src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs
fingerprint-mapped anchors
ResolveAsync() method for capability resolution with anchor requirements
ResolveDefaultProfileIdAsync() for fingerprint-based profile selection
IExecutionBackend.cs
Execution backend contract
src/SwfocTrainer.Core/Contracts/IExecutionBackend.cs
checks
BackendKind property for backend identification
ProfileModels.cs
Profile model backend and capability fields
src/SwfocTrainer.Core/Models/ProfileModels.cs
TrainerProfile record: BackendPreference, RequiredCapabilities, HostPreference, ExperimentalFeatures
BackendPreference="auto", HostPreference="starwarsg_preferred"
RequiredCapabilities and ExperimentalFeatures
RuntimeModels.cs
Process metadata host and scoring fields
src/SwfocTrainer.Core/Models/RuntimeModels.cs
ProcessMetadata record: HostRole, MainModuleSize, WorkshopMatchCount, SelectionScore scoring metrics
IProfileVariantResolver.cs
Profile variant resolver contract
src/SwfocTrainer.Core/Contracts/IProfileVariantResolver.cs
(including
universal_auto)
ResolveAsync() method with optional process list for variant resolution
Enums.cs
Execution kind and process host role enums
src/SwfocTrainer.Core/Models/Enums.cs
Sdk value to ExecutionKind enum
ProcessHostRole enum with values: Unknown, Launcher, GameHost
NullSdkDiagnosticsSink.cs
Null SDK diagnostics sink implementation
src/SwfocTrainer.Core/Services/NullSdkDiagnosticsSink.cs
ISdkDiagnosticsSink as null-object pattern for SDK diagnostics
WriteAsync() method for structured diagnostics records
IBinaryFingerprintService.cs
Binary fingerprint service contract
src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs
CaptureFromPathAsync() method with optional process ID parameter
ISdkOperationRouter.cs
SDK operation router contract
src/SwfocTrainer.Core/Contracts/ISdkOperationRouter.cs
resolution and guards
ExecuteAsync() method for operation execution with cancellation support
ISdkRuntimeAdapter.cs
SDK runtime adapter contract
src/SwfocTrainer.Core/Contracts/ISdkRuntimeAdapter.cs
in-process handling
ExecuteAsync() method for operation execution
ISdkDiagnosticsSink.cs
SDK diagnostics sink contract
src/SwfocTrainer.Core/Contracts/ISdkDiagnosticsSink.cs
WriteAsync() method for diagnostics persistence
ISdkExecutionGuard.cs
SDK execution guard contract
src/SwfocTrainer.Core/Contracts/ISdkExecutionGuard.cs
CanExecute() method for capability-based execution decisions
IBackendRouter.cs
Backend router contract
src/SwfocTrainer.Core/Contracts/IBackendRouter.cs
Resolve() method for routing based on request, profile, process, and capability context
collect-mod-repro-bundle.ps1
Repro bundle collection enhancements
tools/collect-mod-repro-bundle.ps1
main module size capture
role, and module size
Get-ProfileRequiredCapabilities() and Get-RuntimeEvidence() helper functions
snapshot, hook install report, and overlay state
information
build-native.ps1
Native Windows build orchestration script
tools/native/build-native.ps1
resolution
detection
SwfocExtender.Host.exe artifact with recursive fallback
resolve-cmake.ps1
CMake and Visual Studio resolver script
tools/native/resolve-cmake.ps1
locations
scanning
run-live-validation.ps1
Live validation hard gate and TRX polling
tools/run-live-validation.ps1
SWFOC_LIVE_OUTPUT_DIR and SWFOC_LIVE_TEST_NAME
-RequireNonBlockedClassification hard gate for blocking classifications
generate-signature-candidates.ps1
Signature candidate generation from fingerprint
tools/research/generate-signature-candidates.ps1
definitions
flags
set_owner, etc.)
correlation
capture-binary-fingerprint.ps1
Binary fingerprint capture script
tools/research/capture-binary-fingerprint.ps1
version metadata
validate-signature-pack.ps1
Signature pack validation script
tools/validate-signature-pack.ps1
checks
definitions, and operations
run-capability-intel.ps1
Capability intelligence workflow orchestration
tools/research/run-capability-intel.ps1
signature generation, normalization
guidance
validate-binary-fingerprint.ps1
Binary fingerprint validation script
tools/validate-binary-fingerprint.ps1
checks
and module list
normalize-signature-pack.ps1
Signature pack normalization script
tools/research/normalize-signature-pack.ps1
output
BridgeHostMain.cpp
Native extender bridge host main entry point implementation
native/SwfocExtender.Bridge/src/BridgeHostMain.cpp
communication
health, probe_capabilities, and set_credits commands with JSON payload parsing
EconomyPlugin for credits execution with lock/one-shot semantics
HookLifecycleManager.hpp
Hook lifecycle state management contract definition
native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp
NotInstalled, Installed, Failed, RolledBack)
HookRecord struct with state and reason code
HookLifecycleManager.cpp
Hook lifecycle manager implementation
native/SwfocExtender.Core/src/HookLifecycleManager.cpp
markInstalled, markFailed, markRolledBack)
CapabilityProbe.hpp
Capability probe state model contract definition
native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp
Unknown, Experimental, Verified)
CapabilityEntry struct with availability and reason code
CapabilityProbe.cpp
Capability probe implementation
native/SwfocExtender.Core/src/CapabilityProbe.cpp
NamedPipeBridgeServer.hpp
Named-pipe bridge server contract definition
native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp
BridgeCommand struct with feature ID, profile ID, and JSON payload
BridgeResult struct with success status, reason code, and diagnostics
PluginContracts.hpp
Plugin execution contract definitions
native/SwfocExtender.Plugins/include/swfoc_extender/plugins/PluginContracts.hpp
PluginRequest struct for feature execution with int/lock values
CapabilitySnapshot struct for capability state reporting
PluginResult struct with diagnostics map
IPlugin interface for plugin implementations
EconomyPlugin.hpp
Economy plugin interface declaration
native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp
EconomyPlugin class implementing IPlugin interface
execute() for set_credits operations with atomic state tracking
capabilitySnapshot() for credits availability reporting
EconomyPlugin.cpp
Economy plugin set_credits implementation
native/SwfocExtender.Plugins/src/EconomyPlugin.cpp
set_credits execution with validation and lock/one-shot semantics
Verified or Experimental state
OverlayState.hpp
Overlay state model contract definition
native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp
OverlayState.cpp
Overlay state implementation
native/SwfocExtender.Overlay/src/OverlayState.cpp
universal_auto.json
Universal auto profile for variant resolution
profiles/default/profiles/universal_auto.json
extender backend preference
12 files
BackendRouterTests.cs
Backend router decision logic test coverage
tests/SwfocTrainer.Tests/Runtime/BackendRouterTests.cs
legacy paths
NamedPipeExtenderBackendTests.cs
Named-pipe extender backend communication tests
tests/SwfocTrainer.Tests/Runtime/NamedPipeExtenderBackendTests.cs
SdkOperationRouterTests.cs
SDK operation router feature flag and mode validation tests
tests/SwfocTrainer.Tests/Core/SdkOperationRouterTests.cs
validation
resolution
CapabilityMapResolverTests.cs
Capability map resolver anchor validation tests
tests/SwfocTrainer.Tests/Runtime/CapabilityMapResolverTests.cs
ProfileVariantResolverTests.cs
Profile variant resolver detection and fallback tests
tests/SwfocTrainer.Tests/Runtime/ProfileVariantResolverTests.cs
LiveRoeRuntimeHealthTests.cs
Live ROE runtime health test evidence capture
tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs
test
diagnostics
RuntimeModeProbeResolverTests.cs
Runtime mode probe resolver symbol-based detection tests
tests/SwfocTrainer.Tests/Runtime/RuntimeModeProbeResolverTests.cs
ProfileValidatorTests.cs
Profile validator unit tests
tests/SwfocTrainer.Tests/Profiles/ProfileValidatorTests.cs
ProfileValidator.Validate() method
backendPreference and hostPreference fields with invalid values
required capabilities
BuildProfile() helper for test profile construction
SdkExecutionGuardTests.cs
SDK execution guard unit tests
tests/SwfocTrainer.Tests/Runtime/SdkExecutionGuardTests.cs
SdkExecutionGuard.CanExecute() method for capability-based execution decisions
CapabilityResolutionResult test fixtures
BinaryFingerprintServiceTests.cs
Binary fingerprint service unit tests
tests/SwfocTrainer.Tests/Runtime/BinaryFingerprintServiceTests.cs
BinaryFingerprintService.CaptureFromPathAsync() for stable fingerprint generation
hash, fingerprint ID
ProfileInheritanceTests.cs
Profile inheritance test enhancements
tests/SwfocTrainer.Tests/Profiles/ProfileInheritanceTests.cs
HostPreference, BackendPreference, RequiredCapabilities
Profiles"
universal_auto profile to expected manifest list
ProfileActionCatalogTests.cs
SDK routing validation for credits action
tests/SwfocTrainer.Tests/Profiles/ProfileActionCatalogTests.cs
SwfocProfiles_Should_Route_SetCredits_Via_Sdk() validating SDK routing for credits writes
set_credits action uses ExecutionKind.Sdk for SWFOC profiles
roe_3447786229_swfoc
18 files
CMakeLists.txt
Native extender CMake root configuration
native/CMakeLists.txt
Plugins
CMakeLists.txt
Core library CMake configuration
native/SwfocExtender.Core/CMakeLists.txt
sources
CMakeLists.txt
Bridge library and host executable CMake configuration
native/SwfocExtender.Bridge/CMakeLists.txt
CMakeLists.txt
Plugins library CMake configuration
native/SwfocExtender.Plugins/CMakeLists.txt
CMakeLists.txt
Overlay library CMake configuration
native/SwfocExtender.Overlay/CMakeLists.txt
bootstrap-wsl-toolchain.sh
WSL native toolchain bootstrap script
tools/native/bootstrap-wsl-toolchain.sh
tools/native/.local if sudo unavailable
build-native.sh
WSL native build script
tools/native/build-native.sh
repro-bundle.schema.json
Repro bundle schema updates for backend routing and host selection
tools/schemas/repro-bundle.schema.json
selectedHostProcess schema with host ranking fields
backendRouteDecision schema with backend and reason code
capabilityProbeSnapshot schema with capability count and required capabilities
hookInstallReport schema with state and reason code
overlayState schema with availability and visibility flags
repro_bundle_sample.json
Repro bundle sample fixture with vNext sections
tools/fixtures/repro_bundle_sample.json
selectedHostProcess with host ranking fields
backendRouteDecision with memory fallback route
capabilityProbeSnapshot with extender backend
hookInstallReport with unknown state
overlayState with unavailable status
binary-fingerprint.schema.json
Binary fingerprint schema definition
tools/schemas/binary-fingerprint.schema.json
timestamp
signature-pack.schema.json
Signature pack schema definition
tools/schemas/signature-pack.schema.json
binary_fingerprint_sample.json
Binary fingerprint sample fixture
tools/fixtures/binary_fingerprint_sample.json
signature_pack_sample.json
Signature pack sample fixture
tools/fixtures/signature_pack_sample.json
mappings
base_sweaw.json
Base SWEAW profile backend routing configuration
profiles/default/profiles/base_sweaw.json
backendPreference, requiredCapabilities, hostPreference, experimentalFeatures fields
base_swfoc.json
Base SWFOC profile extender routing and SDK execution configuration
profiles/default/profiles/base_swfoc.json
auto preference and starwarsg_preferred host selection
set_credits to required capabilities
set_credits execution kind from Memory to Sdk
aotr_1397421866_swfoc.json
AOTR profile backend routing configuration
profiles/default/profiles/aotr_1397421866_swfoc.json
auto preference and starwarsg_preferred host selection
roe_3447786229_swfoc.json
ROE profile backend routing configuration
profiles/default/profiles/roe_3447786229_swfoc.json
auto preference and starwarsg_preferred host selection
manifest.json
Profile manifest update for universal auto profile
profiles/default/manifest.json
universal_auto profile entry to manifest with R&D version tag
11 files
README.md
Native extender documentation and build guide
native/README.md
layers)
EXTERNAL_TOOLS_SETUP.md
Native extender toolchain setup documentation
docs/EXTERNAL_TOOLS_SETUP.md
resolve-cmake.ps1 for Windows
ARCHITECTURE.md
Architecture documentation for tiered backend routing and extender path
docs/ARCHITECTURE.md
tree
ExecutionKind.Sdk
Sdk kind
PROFILE_FORMAT.md
Profile format documentation for backend routing and capability maps
docs/PROFILE_FORMAT.md
backendPreference, requiredCapabilities, hostPreference, experimentalFeatures fields
set_credits execution via Sdk kind for FoC profiles
hostRole, mainModuleSize, workshopMatchCount, selectionScore metadata
storage
TEST_PLAN.md
Test plan documentation for SDK and backend routing features
docs/TEST_PLAN.md
profile variants, runtime mode probes
extender backend tests
commands
backend route diagnostics
LIVE_VALIDATION_RUNBOOK.md
Live validation runbook updates for extender bridge and ROE hard gate
docs/LIVE_VALIDATION_RUNBOOK.md
RequireNonBlockedClassification flag
extender credits path evidence
source-corpus.md
Research source corpus documentation
tools/research/source-corpus.md
signals
build-fingerprint.md
Build fingerprint contract documentation
tools/research/build-fingerprint.md
RESEARCH_GAME_WORKFLOW.md
Game workflow research contract and safety rules
docs/RESEARCH_GAME_WORKFLOW.md
behavior
README.md
Capability maps directory documentation
profiles/default/sdk/maps/README.md
README.md
README updates for universal auto and research tooling
README.md
universal_auto to supported profiles list
fingerprinting
16 files
Summary by CodeRabbit
New Features
Documentation
Tests
Chores